--- Log opened Sat Mar 31 00:00:19 2012
-!- Vuvu [~Vivan_Ric@115.248.130.148] has quit [Quit: Leaving] | 00:23 | |
-!- blackburn [~qdrgsm@62.106.114.183] has quit [Ping timeout: 246 seconds] | 00:31 | |
n4nd0 | sonney2k: hey, still around? I am getting some weird results with one-vs-one and I don't know if they might actually make sense | 00:31 |
@sonney2k | n4nd0, what is the problem? | 00:34 |
n4nd0 | sonney2k: so I am getting pretty bad classification results with one-vs-one using LibSVM + GaussianKernel | 00:35 |
n4nd0 | sonney2k: one-vs-rest succeeds though | 00:35 |
@sonney2k | n4nd0, you could try to compare results with LibSVMMultiClass | 00:35 |
@sonney2k | it does OvO too | 00:35 |
n4nd0 | sonney2k: ok, thank you! | 00:36 |
@sonney2k | so results should be the same or similar at least | 00:36 |
@sonney2k | n4nd0, did you copy the OvO routine from CMultiClassSVM ? | 00:36 |
n4nd0 | sonney2k: yeah | 00:36 |
PhilTillet | sonney2k, I think I have a reason for the OpenCL being slow on your computer | 00:36 |
@sonney2k | PhilTillet, ok so how do I turn it on then? | 00:37 |
PhilTillet | hmmm, do you have the proprietary NVidia Drivers? | 00:38 |
PhilTillet | if so, then you should have your libopencl.so somewhere in an nvidia folder, for me it is /usr/lib/nvidia-current/libopencl.so | 00:39 |
PhilTillet | libOpenCL.so* | 00:39 |
@sonney2k | doesn't help | 00:43 |
@sonney2k | PhilTillet, any way to debug whether opencl is used? | 00:43 |
PhilTillet | you mean on which device it is used? | 00:44 |
@sonney2k | yes | 00:44 |
PhilTillet | there is a viennacl function for that, hmm wait a bit | 00:44 |
n4nd0 | sonney2k: the results with LibSVMMultiClass are good, classification is correct, there must be errors in one-vs-one :S | 00:45 |
PhilTillet | sonney2k, std::cout << viennacl::ocl::current_device().info(); | 00:45 |
@sonney2k | n4nd0, how do you map labels? | 00:45 |
@sonney2k | n4nd0, they should be in range 0...<nr classes-1> | 00:46 |
@sonney2k | and then you need to map them to +1 / -1 | 00:46 |
n4nd0 | n4nd0: yeah, that's how I do it | 00:46 |
n4nd0 | sonney2k: let me show you the code | 00:46 |
PhilTillet | n4nd0, you send messages to yourself? :p | 00:46 |
n4nd0 | shit, again talking to me :P | 00:46 |
n4nd0 | yeah ... I do it more often than I want to admit | 00:47 |
n4nd0 | :D | 00:47 |
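The one-vs-one scheme being discussed (labels in 0..K-1, mapped to +1/-1 per class pair) can be sketched in a few lines. This is plain Python, not shogun's actual API, and the convention that the first class of the pair maps to +1 is an assumption for illustration:

```python
from itertools import combinations

def ovo_binary_problem(labels, class_a, class_b):
    """Hypothetical helper: restrict a multiclass problem (labels in
    0..K-1) to the pair (class_a, class_b) and map labels to +1/-1.
    Returns the subset of example indices and the binary labels."""
    subset = [i for i, y in enumerate(labels) if y in (class_a, class_b)]
    binary = [+1 if labels[i] == class_a else -1 for i in subset]
    return subset, binary

def ovo_problems(labels, num_classes):
    # one binary problem per unordered class pair -> K*(K-1)/2 machines
    return {(a, b): ovo_binary_problem(labels, a, b)
            for a, b in combinations(range(num_classes), 2)}
```

At apply() time each of the K(K-1)/2 machines votes for one class of its pair and the class with the most votes wins; a votes vector dominated by a single class usually means the subsets or the label mapping went wrong somewhere.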
@sonney2k | PhilTillet, CL Device Vendor ID: 4098 | 00:47 |
@sonney2k | CL Device Name: Intel(R) Core(TM)2 Duo CPU T9900 @ 3.06GHz | 00:47 |
@sonney2k | CL Driver Version: 2.0 | 00:47 |
@sonney2k | so you are right | 00:47 |
@sonney2k | question is why the nvidia one is not taken?! | 00:47 |
PhilTillet | hmmm | 00:47 |
PhilTillet | there can be several reasons | 00:47 |
PhilTillet | do you have an nvidia.icd in your /etc/OpenCL/vendors/ ? | 00:48 |
@sonney2k | yes and an amdocl64.icd | 00:49 |
PhilTillet | I think the reason why is that the program is linked to AMD's ocl | 00:49 |
PhilTillet | AMD's OpenCL implementation does not detect the NVidia Card as a device | 00:49 |
PhilTillet | and NVidia's OCL does not detect the Intel CPU | 00:50 |
PhilTillet | (yes, it's sort of a headache XD) | 00:50 |
@sonney2k | let me remove the amd one | 00:50 |
@sonney2k | CL Device Vendor ID: 4318 | 00:50 |
@sonney2k | CL Device Name: GeForce 9600M GT | 00:50 |
@sonney2k | CL Driver Version: 295.20 | 00:50 |
@sonney2k | aha | 00:50 |
@sonney2k | CPU Apply time : 0.482976 | 00:50 |
@sonney2k | GPU Apply time : 0.249349 | 00:50 |
@sonney2k | yay! | 00:50 |
PhilTillet | hehe | 00:50 |
n4nd0 | sonney2k: there it is train_one_vs_one and classify_one_vs_one | 00:50 |
PhilTillet | that was the reason | 00:50 |
@sonney2k | CL Device Max Compute Units: 4 | 00:51 |
@sonney2k | bah | 00:51 |
@sonney2k | PhilTillet, now benchmark for 1000x100k and get the error down to <1e-6 and all good | 00:52 |
PhilTillet | sonney2k, okay :p | 00:52 |
PhilTillet | i'll do some tests on small inputs | 00:53 |
@sonney2k | n4nd0, where is that? | 00:53 |
PhilTillet | to see where the error comes from | 00:53 |
@sonney2k | makes sense | 00:53 |
n4nd0 | sonney2k: https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/machine/MulticlassMachine.cpp | 00:53 |
PhilTillet | sonney2k, float64_t* lab; | 00:57 |
PhilTillet | float32_t* feat; | 00:57 |
PhilTillet | float64_t* alphas; ... are you sure the computation is not done in double on the cpu? | 00:57 |
@sonney2k | PhilTillet, only alphas | 00:57 |
@sonney2k | but try on your monster GPU with double | 00:57 |
PhilTillet | lol, monster GPU :p | 00:57 |
@sonney2k | then difference should be 1e-16 | 00:57 |
PhilTillet | 1e-16 per example or total? | 01:00 |
@sonney2k | PhilTillet, total | 01:01 |
@sonney2k | n4nd0, I suspect subsets | 01:01 |
n4nd0 | sonney2k: why do you think so? | 01:03 |
@sonney2k | n4nd0, if you print the votes vector - does it show sth suspicious like just one class? | 01:03 |
n4nd0 | sonney2k: yes | 01:04 |
PhilTillet | sonney2k, so 100K vectors of dim 1000 and 1000 Support vectors? | 01:04 |
@sonney2k | PhilTillet, something that fits in memory :) | 01:04 |
@sonney2k | maybe you have to reduce dim | 01:05 |
@sonney2k | to 100 or so | 01:05 |
PhilTillet | yes, I'll see | 01:05 |
PhilTillet | Benchmarking cpu... | 01:05 |
PhilTillet | CPU Apply time : 18.7195 | 01:05 |
PhilTillet | Compiling GPU Kernels... | 01:05 |
PhilTillet | Benchmarking GPUs.. | 01:05 |
PhilTillet | GPU Apply time : 2.19845 | 01:05 |
PhilTillet | Total error : 0.001015 | 01:05 |
PhilTillet | on i7 950 vs GTX 470 | 01:06 |
@sonney2k | nice | 01:06 |
PhilTillet | now got to reduce the error, but it will be harder :p | 01:06 |
@sonney2k | well try with 2 dims and 1 SV and 1 output :) | 01:06 |
PhilTillet | yes, I tried | 01:06 |
PhilTillet | some vectors have 0.0000000 error | 01:06 |
PhilTillet | some have like 1e-6 | 01:07 |
PhilTillet | like the last digit of the floating point representation | 01:07 |
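The "last digit of the floating point representation" effect is easy to reproduce in plain Python by round-tripping every intermediate through single precision. This is a sketch, not the benchmark code from the chat; `struct` is used to emulate float32 since Python floats are doubles:

```python
import struct

def f32(x):
    # round a double to the nearest IEEE single-precision value
    return struct.unpack('f', struct.pack('f', x))[0]

def sum_f32(values):
    # accumulate with every intermediate rounded to float32
    acc = 0.0
    for v in values:
        acc = f32(acc + f32(v))
    return acc

exact = 100.0                                # 1000 * 0.1
err32 = abs(sum_f32([0.1] * 1000) - exact)   # last-digit float32 error accumulates
err64 = abs(sum([0.1] * 1000) - exact)       # double error stays many orders smaller
```

The per-operation relative error is about 1e-7 in float32 versus about 1e-16 in double, which matches the ~1e-6 per-element differences PhilTillet sees.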
@sonney2k | n4nd0, I think the reason is that features can only have one subset | 01:07 |
@sonney2k | I think you would need to store the subset too | 01:07 |
@sonney2k | and assign it in apply | 01:07 |
@sonney2k | n4nd0, or do everything without subsets | 01:08 |
n4nd0 | sonney2k: I don't understand clearly, do you mean that the subsets cannot be set and removed for every machine? | 01:08 |
n4nd0 | sonney2k: how could it be done without subsets? | 01:09 |
@sonney2k | you assign the subset to the features right? | 01:09 |
n4nd0 | yes | 01:09 |
@sonney2k | so there can be only one such assignment | 01:10 |
@sonney2k | which will be the last one | 01:10 |
n4nd0 | set_machine_subset is defined in LinearMulticlassMachine or KernelMulticlassMachine and it basically calls the method set_subset for the features | 01:10 |
@sonney2k | yeah but does KernelMulticlassMachine store the subset? | 01:11 |
@sonney2k | or just assign it to features? | 01:11 |
n4nd0 | no, it doesn't store it | 01:11 |
n4nd0 | assign to features | 01:11 |
@sonney2k | if it assigns things to features only then that is the explanation | 01:11 |
@sonney2k | because in apply() you won't have the correct subset set | 01:12 |
n4nd0 | aham I see | 01:12 |
@sonney2k | so support vectors etc need to be converted back to the features w/o subset | 01:12 |
@sonney2k | I guess that is the best solution | 01:12 |
@sonney2k | there should be a method for that btw | 01:13 |
n4nd0 | but since the subset is removed at the end | 01:13 |
n4nd0 | then the subset shouldn't be there when apply is called right? | 01:13 |
@sonney2k | n4nd0, you could call CKernelMachine::store_model_features() | 01:14 |
@sonney2k | then it will store the SVs | 01:14 |
@sonney2k | and all good | 01:15 |
n4nd0 | mmm I think I am not getting the point correctly, I am sorry | 01:15 |
@sonney2k | n4nd0, yeah but the problem is that the kernel machine doesn't internally store the trained machine | 01:15 |
@sonney2k | it only stores indices to support vectors and alphas | 01:15 |
@sonney2k | so these sv-indices will be relative to the subsets that were used | 01:16 |
@sonney2k | so the best solution is to make the svm store not only the sv-indices but the real feature vectors | 01:16 |
@sonney2k | which is what the store_model_features() call will do | 01:17 |
n4nd0 | but why will storing the feature vectors solve the problem? | 01:17 |
PhilTillet | sonney2k, double on alphas makes things better but there is still a small error at the end | 01:17 |
@sonney2k | PhilTillet, and with double? | 01:17 |
PhilTillet | Benchmarking cpu... | 01:17 |
PhilTillet | CPU Apply time : 18.7456 | 01:17 |
PhilTillet | Compiling GPU Kernels... | 01:17 |
PhilTillet | Benchmarking GPUs.. | 01:17 |
PhilTillet | GPU Apply time : 2.1955 | 01:17 |
PhilTillet | Total error : 0.000015 | 01:17 |
PhilTillet | with 100k of 1000 | 01:17 |
PhilTillet | and double on alphas | 01:18 |
PhilTillet | don't know where the rounding error takes place | 01:18 |
@sonney2k | PhilTillet, and when you use just 1k instead of 100k and fewer dims? | 01:18 |
@sonney2k | n4nd0, then the SVM is self-contained | 01:19 |
PhilTillet | 1k SV ? | 01:19 |
@sonney2k | yes | 01:19 |
PhilTillet | oh my | 01:19 |
PhilTillet | very bad | 01:19 |
PhilTillet | Total error : 0.006018 | 01:19 |
@sonney2k | n4nd0, it won't store indices relative to anything but the features themselves | 01:19 |
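The subset problem sonney2k describes can be illustrated with a minimal sketch (plain Python, hypothetical names, not shogun's real classes): if the machine only keeps indices relative to a subset that is later removed, apply() would look up the wrong rows; copying the actual support-vector features at the end of training, which is roughly what store_model_features() does, makes the machine self-contained:

```python
class KernelMachineSketch:
    """Toy model of the issue: SV indices are relative to the
    training subset, and the subset is removed after training."""
    def __init__(self):
        self.sv_indices = []      # indices into the subsetted features
        self.sv_vectors = None    # filled in by the store step

    def train(self, features, subset, local_sv_indices):
        self.sv_indices = local_sv_indices
        # what store_model_features() effectively does: resolve the
        # subset-relative indices and copy the real feature vectors,
        # so apply() no longer depends on the subset still being set
        self.sv_vectors = [features[subset[i]] for i in local_sv_indices]

    def apply_sv(self, k):
        # safe even though the subset is long gone
        return self.sv_vectors[k]

features = ['x0', 'x1', 'x2', 'x3', 'x4']
machine = KernelMachineSketch()
machine.train(features, subset=[1, 3, 4], local_sv_indices=[0, 2])
# without the copy, local index 2 taken against the full feature set
# would wrongly give 'x2'; the stored vectors refer to 'x1' and 'x4'
```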
@sonney2k | PhilTillet, that sounds like a bug somewhere | 01:19 |
n4nd0 | sonney2k: ok, I think I am starting to see the point | 01:20 |
@sonney2k | PhilTillet, hopefully in your opencl code :D | 01:20 |
PhilTillet | lol :D | 01:20 |
n4nd0 | sonney2k: I have to think about how to implement it now, since the training occurs in CMulticlassMachine and both CLinearMulticlassMachine and CKernelMulticlassMachine derive from it | 01:21 |
PhilTillet | when there's an error it's always on the last digit | 01:21 |
PhilTillet | out[3]=7.026104 | 01:21 |
PhilTillet | out_gpu[3]=7.026103 | 01:21 |
n4nd0 | sonney2k: but we just want to do this for CKernelMulticlassMachine | 01:21 |
PhilTillet | and no error for out[0] to out[2] | 01:21 |
PhilTillet | well | 01:22 |
PhilTillet | total_diff was a float | 01:22 |
@sonney2k | PhilTillet, bah | 01:22 |
@sonney2k | n4nd0, just do: set_store_model_features(true); | 01:23 |
@sonney2k | then it will do that for you :) | 01:23 |
@sonney2k | no need to do any extra magic | 01:24 |
@sonney2k | n4nd0, btw one problem later might be that there is already a subset set | 01:24 |
@sonney2k | n4nd0, so one would need nested subsets %-) | 01:24 |
n4nd0 | sonney2k: but isn't it enough to remove the subsets for every machine that is trained? | 01:25 |
n4nd0 | once the machine is trained, it removes the subsets | 01:25 |
@sonney2k | n4nd0, that is what set_store_model_features(true); will automagically do | 01:26 |
@sonney2k | + store the features | 01:26 |
@sonney2k | that will be expensive for one-vs-rest though | 01:26 |
@sonney2k | for that it is much better to just store indices | 01:27 |
PhilTillet | sonney2k, seems like the problem is a bit more complicated | 01:27 |
n4nd0 | sonney2k: one vs rest doesn't need subsets I think | 01:27 |
PhilTillet | for NUM 15 and DIM 3 : | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 0 1.64044e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 1 4.05671e-08 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 2 1.7945e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 3 2.29995e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 4 1.46e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 5 3.52479e-08 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 6 2.49558e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 7 6.59237e-08 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 8 2.7401e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 9 1.81766e-08 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 10 1.48443e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 11 3.06445e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 12 1.21206e-07 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 13 6.44709e-08 | 01:27 |
PhilTillet | Difference between GPU and CPU at pos 14 2.98222e-08 | 01:27 |
PhilTillet | Total error : 0.000002 | 01:27 |
PhilTillet | (sorry for the spam :D) | 01:28 |
@sonney2k | n4nd0, true but then you should only set store model features to true for one-vs-one | 01:28 |
@sonney2k | PhilTillet, float again? | 01:28 |
PhilTillet | no, double | 01:28 |
@sonney2k | that's too much :( | 01:28 |
PhilTillet | ah wait | 01:28 |
PhilTillet | there are some other differences | 01:28 |
PhilTillet | const float64_t rbf_width | 01:28 |
@sonney2k | yes? | 01:29 |
PhilTillet | i used float not double for the width | 01:29 |
n4nd0 | sonney2k: if I have understood correctly, where it makes more sense would be in line 185 here | 01:29 |
@sonney2k | PhilTillet, well I guess you have to debug step by step | 01:29 |
n4nd0 | sonney2k: https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/machine/MulticlassMachine.cpp | 01:29 |
@sonney2k | first check if the kernel is the same | 01:29 |
n4nd0 | sonney2k: right after the training | 01:29 |
PhilTillet | will be super fun | 01:29 |
PhilTillet | lol | 01:29 |
@sonney2k | then alpha*k(x,y) | 01:29 |
@sonney2k | well just for one element | 01:30 |
@sonney2k | should be easy | 01:30 |
@sonney2k | n4nd0, yes exactly - try it and report if it works! | 01:31 |
-!- flxb [~cronor@g225030062.adsl.alicedsl.de] has quit [Quit: flxb] | 01:42 | |
PhilTillet | sonney2k, good news | 01:46 |
PhilTillet | sort of | 01:46 |
PhilTillet | :p | 01:46 |
PhilTillet | i am using double everywhere | 01:46 |
PhilTillet | the error is about 1e-14 per example | 01:47 |
PhilTillet | don't think I can get lower | 01:47 |
PhilTillet | but now the improvement on the GPU is no longer *9 | 01:48 |
PhilTillet | but more like *3 | 01:48 |
PhilTillet | :p | 01:48 |
PhilTillet | had to reduce dim to 100 | 01:51 |
PhilTillet | (else, it does not fit in memory) | 01:51 |
@sonney2k | n4nd0, did it work? | 01:51 |
@sonney2k | PhilTillet, heh | 01:51 |
PhilTillet | Benchmarking cpu... | 01:51 |
PhilTillet | CPU Apply time : 3.11451 | 01:51 |
PhilTillet | Compiling GPU Kernels... | 01:51 |
PhilTillet | Benchmarking GPUs.. | 01:51 |
PhilTillet | GPU Apply time : 0.64685 | 01:51 |
PhilTillet | Total error : 0.000000 | 01:51 |
PhilTillet | for 100k examples of dim 100 | 01:51 |
PhilTillet | and 1000 SVs | 01:51 |
n4nd0 | sonney2k: not yet, idk why but now apply for one-vs-one seems not to output anything | 01:51 |
@sonney2k | PhilTillet, so things seem OK - increasing precision reduces the error. I would have hoped for 1e-16 or 15 but ok | 01:53 |
@sonney2k | might be depending on width, summing order etc | 01:53 |
PhilTillet | yes | 01:53 |
PhilTillet | but seems like double precision really reduces performance :p | 01:53 |
@sonney2k | yeah | 01:53 |
PhilTillet | seems like graphics cards are still slower than cpus at doing double operations | 01:54 |
@sonney2k | but not soo much on CPU | 01:54 |
PhilTillet | yes | 01:54 |
PhilTillet | on my mobile GPU things are even worse | 01:54 |
@sonney2k | I mean also CPU benefits from using floats | 01:54 |
PhilTillet | yes but the difference between float and double on CPU does not seem so significant | 01:54 |
PhilTillet | i mean there is no real performance drop down | 01:54 |
@sonney2k | there should be | 01:55 |
@sonney2k | we should be measuring memory speed | 01:55 |
@sonney2k | so close to factor 2 speedup?! | 01:55 |
PhilTillet | on my mobile GPU yes | 01:56 |
PhilTillet | on high end GPU and high end CPU, more like factor 4 | 01:56 |
@sonney2k | also on CPU | 01:56 |
PhilTillet | oh you mean for float compared to double | 01:56 |
PhilTillet | but another point is that I changed everything to double in GPU | 01:57 |
PhilTillet | even temporaries etc | 01:57 |
PhilTillet | on CPU, some temporaries might be float | 01:57 |
@sonney2k | PhilTillet, ahh ok it is using double internally - just not for the kernel | 01:57 |
@sonney2k | that is why | 01:58 |
PhilTillet | oh ok | 01:58 |
PhilTillet | :p | 01:58 |
@sonney2k | n4nd0, uhh | 01:58 |
PhilTillet | so i should not use double everywhere in kernel too | 01:58 |
PhilTillet | that explains the difference error too then I guess | 01:58 |
@sonney2k | ? | 01:58 |
PhilTillet | i don't get it | 01:58 |
PhilTillet | where does shogun use floating point temporaries? | 01:58 |
PhilTillet | in kernels? | 01:59 |
@sonney2k | PhilTillet, I meant that even when you use float on CPU it will use double for most computations | 01:59 |
@sonney2k | except for kernel | 01:59 |
PhilTillet | oh I see | 01:59 |
@sonney2k | PhilTillet, shogun never uses float32_t | 01:59 |
PhilTillet | is the implementation multithreaded for CPUs? | 01:59 |
@sonney2k | as temporaries | 01:59 |
@sonney2k | yes multithreaded | 01:59 |
PhilTillet | okay | 01:59 |
@sonney2k | so load should be #cpus | 02:00 |
PhilTillet | so *4 sounds realistic | 02:00 |
PhilTillet | *9 was a bit unrealistic :p | 02:00 |
@sonney2k | anyway most expensive is kernel computation | 02:00 |
PhilTillet | yes | 02:00 |
PhilTillet | same on GPU | 02:00 |
PhilTillet | but it is also where you can get most GFlops with caching | 02:00 |
n4nd0 | sonney2k: idk why yet, but doing m_machine->set_store_model_features(true) in MulticlassMachine.cpp:184 makes apply return nothing :S | 02:01 |
@sonney2k | PhilTillet, maybe one can be more clever and compute k(x_i, x) for all x first and then go to x_j | 02:01 |
PhilTillet | well, on cpu you mean? | 02:01 |
@sonney2k | ahh but in your case examples fit in memory... | 02:01 |
@sonney2k | yeah | 02:02 |
PhilTillet | yes | 02:02 |
PhilTillet | on GPU it is somewhat treated as a matrix-matrix product | 02:02 |
PhilTillet | but where operations are not inner product but norm_2 | 02:02 |
PhilTillet | XD | 02:02 |
PhilTillet | exp(-norm2/width) actually | 02:02 |
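The matrix-product view PhilTillet describes relies on the standard expansion ||x-y||^2 = ||x||^2 + ||y||^2 - 2<x,y>: the norms are precomputed once per vector and the pairwise inner products become a GEMM-like kernel. A pure-Python sketch (shogun does this in C++/OpenCL; the exp(-||x-y||^2 / width) convention is taken from the chat):

```python
import math

def gaussian_kernel_matrix(X, Y, width):
    """K[i][j] = exp(-||X[i] - Y[j]||^2 / width), computed via the
    norm expansion so the inner loop is just an inner product."""
    x_norms = [sum(v * v for v in x) for x in X]   # precomputed once
    y_norms = [sum(v * v for v in y) for y in Y]   # precomputed once
    K = []
    for i, x in enumerate(X):
        row = []
        for j, y in enumerate(Y):
            dot = sum(a * b for a, b in zip(x, y))      # the GEMM-like part
            dist2 = x_norms[i] + y_norms[j] - 2.0 * dot
            row.append(math.exp(-dist2 / width))
        K.append(row)
    return K
```

With X == Y the diagonal is 1.0 up to rounding; this is also why the expansion is attractive on GPUs, since the dot products map onto well-cached matrix-multiply code.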
@sonney2k | n4nd0, great :( | 02:02 |
@sonney2k | n4nd0, I guess you need to debug what is going on (print out number of SVs / and values of alphas or so) | 02:03 |
PhilTillet | could maybe get a little increase by mapping features and stuff to texture cache, but it is not something I am used to doing and I am not even sure about the gain that could be got | 02:03 |
@sonney2k | n4nd0, if that doesn't work ask on the mailinglist - heiko wrote that code so he can help | 02:03 |
n4nd0 | sonney2k: ok, thank you - I am going to try to see now why that happens, why it returns nothing in this case | 02:04 |
@sonney2k | PhilTillet, well you should make sure all x_i are in GPU mem | 02:04 |
@sonney2k | n4nd0, ok | 02:04 |
n4nd0 | sonney2k: in any case, what was the other possibility of doing one-vs-one without subsets? | 02:04 |
@sonney2k | for x ... not so important | 02:04 |
@sonney2k | n4nd0, well craft new features / labels | 02:04 |
@sonney2k | could mean huge overhead I know... | 02:05 |
@sonney2k | anyway I have to sleep now | 02:05 |
@sonney2k | cu | 02:05 |
PhilTillet | cu | 02:05 |
n4nd0 | good night, bye | 02:05 |
n4nd0 | it looks pretty cool your OpenGL stuff | 02:10 |
n4nd0 | that was for you PhilTillet ;) | 02:10 |
PhilTillet | OpenCL you mean? | 02:10 |
PhilTillet | :p | 02:10 |
n4nd0 | c'mon ... isn't it kind of late? allow me some mistakes :P | 02:10 |
PhilTillet | haha :D | 02:11 |
PhilTillet | it's true | 02:11 |
PhilTillet | i'm tired | 02:11 |
PhilTillet | but jet lag pwnd me... | 02:11 |
n4nd0 | haha I see | 02:11 |
n4nd0 | you said you were in a conference right? | 02:12 |
PhilTillet | yes | 02:13 |
PhilTillet | :p | 02:13 |
PhilTillet | High Performance Computing conference | 02:13 |
n4nd0 | sounds interesting | 02:14 |
n4nd0 | I hope you learnt a lot of new stuff | 02:14 |
PhilTillet | well to be honest | 02:14 |
PhilTillet | i just didn't get most of the stuff | 02:14 |
PhilTillet | :p | 02:14 |
n4nd0 | well ... honesty is a good feature! :P | 02:15 |
PhilTillet | i mean, i was very lucky to get there | 02:16 |
PhilTillet | it was not made for students | 02:16 |
PhilTillet | :p | 02:16 |
PhilTillet | that's also why I didn't get most of the stuff ^^ | 02:16 |
PhilTillet | but well, was able to give my talk, that was why i was here :p | 02:16 |
n4nd0 | I hope it went well | 02:17 |
PhilTillet | well, more or less, i think it was fine... but you know at the end you're always like "oh i've forgotten to talk about that and that and that" | 02:17 |
n4nd0 | yeah, I know that feeling | 02:19 |
n4nd0 | good night people | 03:06 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 03:07 | |
PhilTillet | good night :) | 03:07 |
-!- vikram360 [~vikram360@117.192.170.195] has quit [Ping timeout: 260 seconds] | 03:58 | |
-!- vikram360 [~vikram360@117.192.177.199] has joined #shogun | 05:13 | |
-!- vikram360 [~vikram360@117.192.177.199] has quit [Ping timeout: 248 seconds] | 05:22 | |
-!- vikram360 [~vikram360@117.192.167.204] has joined #shogun | 05:23 | |
-!- Miggy [~piggy@14.139.82.6] has quit [Ping timeout: 252 seconds] | 05:38 | |
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has left #shogun ["Leaving"] | 06:00 | |
-!- vikram360 [~vikram360@117.192.167.204] has quit [Ping timeout: 244 seconds] | 06:25 | |
-!- av3ngr [av3ngr@nat/redhat/x-kwymmyssrqxjiwle] has joined #shogun | 07:53 | |
-!- harshit_ [~harshit@182.68.160.94] has joined #shogun | 09:16 | |
-!- av3ngr [av3ngr@nat/redhat/x-kwymmyssrqxjiwle] has quit [Quit: That's all folks!] | 09:54 | |
-!- harshit_ [~harshit@182.68.160.94] has quit [Ping timeout: 260 seconds] | 10:27 | |
-!- harshit_ [~harshit@182.68.158.71] has joined #shogun | 10:30 | |
-!- blackburn [~qdrgsm@62.106.114.183] has joined #shogun | 10:39 | |
-!- vikram360 [~vikram360@117.192.188.233] has joined #shogun | 11:04 | |
-!- flxb [~cronor@e177094239.adsl.alicedsl.de] has joined #shogun | 11:05 | |
-!- gsomix [~gsomix@188.168.4.150] has quit [Ping timeout: 248 seconds] | 11:06 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 11:12 | |
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun | 11:17 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Client Quit] | 11:17 | |
-!- flxb_ [~cronor@e177094239.adsl.alicedsl.de] has joined #shogun | 11:18 | |
-!- flxb [~cronor@e177094239.adsl.alicedsl.de] has quit [Ping timeout: 246 seconds] | 11:19 | |
-!- flxb [~cronor@e178183060.adsl.alicedsl.de] has joined #shogun | 11:21 | |
-!- flxb_ [~cronor@e177094239.adsl.alicedsl.de] has quit [Ping timeout: 246 seconds] | 11:22 | |
PhilTillet | hey | 11:35 |
-!- jckrz [~jacek@89-69-164-5.dynamic.chello.pl] has joined #shogun | 11:36 | |
blackburn | hi | 11:39 |
-!- harshit_ [~harshit@182.68.158.71] has quit [Read error: Connection reset by peer] | 11:50 | |
-!- gsomix [~gsomix@83.149.21.216] has joined #shogun | 12:04 | |
PhilTillet | how are you blackburn ? | 12:16 |
blackburn | PhilTillet: fine, what about you? | 12:17 |
-!- Marty28 [~Marty@158.181.76.57] has joined #shogun | 12:21 | |
Marty28 | hiho | 12:21 |
PhilTillet | blackburn, fine :) | 12:30 |
PhilTillet | solved most problems with OpenCL | 12:31 |
PhilTillet | :p | 12:31 |
blackburn | yeah I have seen you were discussing things yesterday | 12:33 |
-!- harshit_ [~harshit@182.68.133.39] has joined #shogun | 12:35 | |
PhilTillet | Still there is one last problem on precision, I asked Soeren about that | 12:39 |
PhilTillet | for computing norm(x-y) you do norm(x) + norm(y) - 2*inner_prod(x,y) | 12:39 |
PhilTillet | in the gaussian kernel | 12:39 |
PhilTillet | why? :) | 12:39 |
PhilTillet | (I use inner_prod(x-y,x-y) so the round off error is different) | 12:40 |
blackburn | PhilTillet: which is better in means of precision? | 12:41 |
PhilTillet | I have absolutely no idea :D | 12:42 |
PhilTillet | but the two are different | 12:42 |
PhilTillet | the second one might be faster to compute though | 12:42 |
blackburn | PhilTillet: well you may precompute norm things right? | 12:42 |
PhilTillet | what do you mean? | 12:45 |
PhilTillet | I don't precompute anything | 12:46 |
blackburn | I mean dot(x-y,x-y) can be partially precomputed | 12:47 |
PhilTillet | well I kinda do a for loop with "diff = x[i] - y[i] ; sum+=pow(diff,2);" | 12:48 |
blackburn | so what is faster | 12:49 |
blackburn | to compute pow'ed differences | 12:49 |
blackburn | or to compute dot product? | 12:49 |
PhilTillet | to compute pow'ed difference for sure, but from a rounding error point of view I have no idea which is more precise :p | 12:51 |
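On the precision question left open here: the expanded form norm(x) + norm(y) - 2*dot(x,y) can suffer catastrophic cancellation when x and y are close but have large norms, while the direct sum of squared differences stays accurate. A tiny pure-Python demonstration (doubles here; the same effect hits float32 at much smaller magnitudes):

```python
def dist2_direct(x, y):
    # the "diff = x[i] - y[i]; sum += pow(diff, 2)" loop from the chat
    return sum((a - b) ** 2 for a, b in zip(x, y))

def dist2_expanded(x, y):
    # ||x||^2 + ||y||^2 - 2<x,y>, as in the GaussianKernel expansion
    return (sum(a * a for a in x) + sum(b * b for b in y)
            - 2.0 * sum(a * b for a, b in zip(x, y)))

x = [1e8 + 0.5]
y = [1e8]
# the true squared distance is 0.25; the expanded form subtracts
# 1e16-sized terms and loses every significant digit of the result
```

So the expansion buys speed (precomputed norms plus a matrix product) at the cost of accuracy for nearby points with large norms, which is one plausible source of the differing round-off between the two implementations.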
-!- gsomix [~gsomix@83.149.21.216] has quit [Ping timeout: 246 seconds] | 12:52 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 12:54 | |
n4nd0 | blackburn: hey there! | 13:02 |
Marty28 | Hi! Do you know where I can find some relatively new ***Python*** scripts on computing and visualizing POIMs? (http://www.fml.tuebingen.mpg.de/raetsch/projects/POIM) | 13:04 |
n4nd0 | Marty28: no idea :( | 13:05 |
Marty28 | thx | 13:05 |
Marty28 | I guess I have to ask Gunnar Rätsch. | 13:06 |
flxb | Why is it that in python I have to import KernelRidgeRegression and in the online documentation the class is called KRR? Is the documentation outdated? | 13:42 |
n4nd0 | flxb: is in the online doc for python or for another language? | 13:45 |
n4nd0 | flxb: naming changes among languages | 13:45 |
flxb | n4nd0: i was looking at the c++ classes. where can i find the online doc for python? | 13:46 |
n4nd0 | flxb: I don't think there is doc for python | 13:46 |
n4nd0 | flxb: we only have doxygen doc extracted from C++ source code | 13:47 |
n4nd0 | flxb: but you should be able to use help() in python | 13:47 |
n4nd0 | flxb: and if you have a problem like this, that you do know the name in C++ but not in python, I suggest you to check the file | 13:48 |
n4nd0 | src/interfaces/python_modular | 13:48 |
flxb | n4nd0: i was always curious about that. why is there no documentation on how to use shogun in python? | 13:48 |
n4nd0 | flxb: well, I think that if you issue help you will find quite a bit of info as well | 13:49 |
n4nd0 | but yes, more documentation would be quite useful probably | 13:50 |
n4nd0 | anyway, I think that if you read C++ doc/check the code it is normally enough to get the point of everything | 13:50 |
n4nd0 | and don't forget the examples! | 13:51 |
-!- harshit_ [~harshit@182.68.133.39] has quit [Ping timeout: 246 seconds] | 13:51 | |
flxb | n4nd0: as far as i know the python modular examples never work | 13:55 |
n4nd0 | flxb: that's weird | 13:55 |
n4nd0 | flxb: did you configure with support for python_modular? | 13:55 |
flxb | n4nd0: yes it works fine. but i have to import modshogun, not shogun | 13:56 |
n4nd0 | flxb: I didn't get it, do the examples work on your machine or not? | 13:57 |
flxb | n4nd0: If I update all imports they would work, I guess. | 13:57 |
n4nd0 | flxb: I don't think it is normal that you have to do that | 13:58 |
flxb | I don't know if it is just a problem on my machine, but i cannot import shogun. i have to import modshogun. | 13:58 |
n4nd0 | no problem in my machine importing shogun | 13:58 |
n4nd0 | and the examples work fine | 13:59 |
flxb | n4nd0: ah, i think it is because i don't make install. i just compile and add src/interfaces/python_modular to my python path | 14:02 |
n4nd0 | aha | 14:02 |
n4nd0 | yes, try to do sudo make install as well | 14:03 |
-!- harshit_ [~harshit@182.68.133.39] has joined #shogun | 14:18 | |
-!- jckrz [~jacek@89-69-164-5.dynamic.chello.pl] has quit [Quit: Ex-Chat] | 14:30 | |
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 244 seconds] | 14:55 | |
flxb | n4nd0: how do you install when you can't sudo? | 14:56 |
n4nd0 | flxb: can you make install? | 14:57 |
flxb | n4nd0: no it tries to alter permissions on /usr/lib | 15:00 |
n4nd0 | flxb: hmm I don't know then :S | 15:01 |
flxb | n4nd0: this isn't a big problem. i'll just use modshogun | 15:01 |
n4nd0 | flxb: I guess that there must be an option to install just for the user you have permissions for | 15:01 |
-!- blackburn_ [50ea622b@gateway/web/freenode/ip.80.234.98.43] has joined #shogun | 15:04 | |
-!- blackburn_ [50ea622b@gateway/web/freenode/ip.80.234.98.43] has quit [Quit: Page closed] | 15:37 | |
-!- gsomix [~gsomix@85.26.232.131] has joined #shogun | 15:38 | |
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun | 15:41 | |
-!- harshit_ [~harshit@182.68.133.39] has quit [Read error: Connection reset by peer] | 15:57 | |
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 244 seconds] | 15:59 | |
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has joined #shogun | 15:59 | |
-!- gsomix [~gsomix@85.26.232.131] has quit [Quit: ????? ? ?? ??? (xchat 2.4.5 ??? ??????)] | 16:02 | |
-!- gsomix [~gsomix@85.26.232.131] has joined #shogun | 16:02 | |
-!- PhilTillet [~Philippe@npasserelle10.minet.net] has quit [Ping timeout: 244 seconds] | 16:07 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 245 seconds] | 16:15 | |
gsomix | hi | 16:18 |
-!- harshit_ [~harshit@182.68.67.61] has joined #shogun | 16:32 | |
-!- gsomix [~gsomix@85.26.232.131] has quit [Ping timeout: 252 seconds] | 16:55 | |
-!- gsomix [~gsomix@85.26.232.131] has joined #shogun | 17:04 | |
Marty28 | void prepare_POIM2 ( float64_t * distrib, | 17:10 |
Marty28 | int32_t num_sym, | 17:10 |
Marty28 | int32_t num_feat | 17:10 |
Marty28 | ) | 17:10 |
Marty28 | Here I need a float64_t * | 17:10 |
Marty28 | numpy.ones((4,4)) gives me a numpy.ndarray | 17:12 |
Marty28 | Can I convert the matrix somehow to fit into the float64_t*? | 17:12 |
Marty28 | I try using the script poim.py from http://people.tuebingen.mpg.de/vipin/www.fml.tuebingen.mpg.de/raetsch//projects/easysvm/ | 17:13 |
Marty28 | I get the ERROR: "TypeError: in method 'WeightedDegreePositionStringKernel_prepare_POIM2', argument 2 of type 'float64_t *'" | 17:13 |
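A TypeError like this from a SWIG wrapper usually means the array handed in does not match the expected C type: a float64_t* typemap wants a C-contiguous float64 ndarray. A sketch of the usual fix (the prepare_POIM2 call itself is shogun-specific and not reproduced here; numpy.ones already produces float64, so the float32 array below is a constructed example of the failing case):

```python
import numpy as np

distrib = np.ones((4, 4))              # default dtype is already float64
bad = distrib.astype(np.float32)       # a dtype that would trip the typemap
# force the dtype and memory layout the wrapper expects
fixed = np.ascontiguousarray(bad, dtype=np.float64)
```

If the array is already float64 and contiguous, ascontiguousarray returns it unchanged, so it is cheap to apply defensively before calling into the wrapper.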
-!- flxb [~cronor@e178183060.adsl.alicedsl.de] has quit [Read error: Connection reset by peer] | 17:19 | |
-!- flxb [~cronor@e178183060.adsl.alicedsl.de] has joined #shogun | 17:30 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 17:33 | |
-!- romovpa [bc2c2ad0@gateway/web/freenode/ip.188.44.42.208] has joined #shogun | 17:54 | |
-!- romovpa_ [bc2c2ad0@gateway/web/freenode/ip.188.44.42.208] has joined #shogun | 17:59 | |
-!- romovpa [bc2c2ad0@gateway/web/freenode/ip.188.44.42.208] has quit [Ping timeout: 245 seconds] | 18:00 | |
gsomix | sonney2k, hey | 18:02 |
-!- Marty28 [~Marty@158.181.76.57] has quit [Quit: ChatZilla 0.9.88.1 [Firefox 11.0/20120310010446]] | 18:02 | |
-!- romovpa_ [bc2c2ad0@gateway/web/freenode/ip.188.44.42.208] has quit [Client Quit] | 18:03 | |
-!- harshit_ [~harshit@182.68.67.61] has quit [Ping timeout: 260 seconds] | 18:05 | |
-!- Marty28 [~Marty@158.181.76.57] has joined #shogun | 18:35 | |
-!- romi_ [~mizobe@187.74.1.223] has quit [Ping timeout: 246 seconds] | 18:37 | |
-!- romi_ [~mizobe@187.74.1.223] has joined #shogun | 18:38 | |
-!- gsomix [~gsomix@85.26.232.131] has quit [Ping timeout: 260 seconds] | 18:42 | |
-!- gsomix [~gsomix@85.26.234.192] has joined #shogun | 18:43 | |
-!- romi_ [~mizobe@187.74.1.223] has quit [Ping timeout: 246 seconds] | 18:47 | |
-!- nickon [d5774105@gateway/web/freenode/ip.213.119.65.5] has joined #shogun | 19:09 | |
nickon | good evening, everyone (for me at least) | 19:10 |
nickon | :) | 19:10 |
-!- romi_ [~mizobe@187.74.1.223] has joined #shogun | 19:12 | |
-!- harshit_ [~harshit@182.68.67.61] has joined #shogun | 19:13 | |
n4nd0 | nickon: hey! | 19:19 |
nickon | hey n4nd0 :) | 19:21 |
nickon | how are you ? | 19:21 |
nickon | are you a developer at the shogun organisation ? | 19:21 |
n4nd0 | nickon: I have done some contributions to the project | 19:21 |
nickon | what did you work on then ? :) | 19:22 |
n4nd0 | nickon: and I am fine thanks, what about you? | 19:22 |
nickon | I'm fine too, thanx :) | 19:22 |
n4nd0 | nickon: small things here and there :) | 19:23 |
n4nd0 | nickon: https://github.com/shogun-toolbox/shogun/commit/afb8b5eb7058640949cc7774d09b8756a66a5ada you can check them here if interested | 19:23 |
nickon | just nice to get an idea on who is who and who is working on what :) | 19:23 |
n4nd0 | yeah sure | 19:24 |
n4nd0 | now I am working on a dimensionality reduction algo | 19:24 |
nickon | what kind of method are you implementing ? | 19:25 |
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has joined #shogun | 19:25 | |
nickon | my thesis is somehow related to dimensionality reduction (tensor compression, so not exactly in the context of machine learning) ;) | 19:26
n4nd0 | Stochastic Proximity Embedding | 19:26 |
n4nd0 | aham, are you working on it now? | 19:27 |
nickon | interesting | 19:27 |
nickon | well, not right now ;) | 19:27 |
nickon | but yes, during this academic year I've been busy on that | 19:27 |
n4nd0 | ok | 19:28 |
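[Editor's note: a minimal sketch of the Stochastic Proximity Embedding method n4nd0 mentions above, in plain NumPy rather than Shogun's API. The learning-rate schedule and pair-sampling loop follow the classic SPE recipe; parameter names are illustrative, not Shogun's.]

```python
import numpy as np

def spe(D, n_components=2, n_iter=50000, lr=0.5, eps=1e-9, seed=0):
    """Minimal Stochastic Proximity Embedding.

    D: (n, n) matrix of target distances from the original space.
    Returns an (n, n_components) embedding whose pairwise distances
    approximate D.
    """
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    Y = rng.standard_normal((n, n_components))
    for t in range(n_iter):
        i, j = rng.integers(0, n, size=2)
        if i == j:
            continue
        diff = Y[i] - Y[j]
        d = np.sqrt(diff @ diff) + eps
        # Move the pair together/apart in proportion to the discrepancy
        # between the current and the target distance; lr decays to 0.
        step = lr * (1.0 - t / n_iter) * 0.5 * (D[i, j] - d) / d
        Y[i] += step * diff
        Y[j] -= step * diff
    return Y

# Toy check: 6 collinear points in R^3 embedded into 1-D.
X = np.linspace(0.0, 1.0, 6)[:, None] * np.array([1.0, 2.0, 2.0])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = spe(D, n_components=1)
Dy = np.abs(Y[:, 0][:, None] - Y[:, 0][None, :])
```

Since the toy data is exactly embeddable in one dimension, the recovered pairwise distances `Dy` should closely track the targets `D`.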
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has quit [Client Quit] | 19:30 | |
nickon | At the moment I'm actually interested in what shogun offers for GSoC (as I'm still a student :)) | 19:30 |
nickon | the C5.0 integration project seems to get my attention, can't really explain why | 19:30 |
-!- harshit_ [~harshit@182.68.67.61] has quit [Ping timeout: 260 seconds] | 19:31 | |
n4nd0 | ok, nice | 19:34 |
nickon | Where are you from actually, if I may ask? | 19:34 |
-!- PhilTillet [~Philippe@tillet-p42154.maisel.int-evry.fr] has joined #shogun | 19:35 | |
-!- harshit_ [~harshit@182.68.67.61] has joined #shogun | 19:37 | |
gsomix | nickon, hi. if you are interested to know, I'm working on python3 interface and covertree integration now. | 19:37 |
-!- harshit_ [~harshit@182.68.67.61] has quit [Client Quit] | 19:37 | |
n4nd0 | I come from Spain, what about you? | 19:37 |
-!- harshit__ [~harshit@182.68.67.61] has joined #shogun | 19:38 | |
-!- harshit__ [~harshit@182.68.67.61] has quit [Client Quit] | 19:38 | |
-!- harshit_ [~harshit@182.68.67.61] has joined #shogun | 19:38 | |
harshit_ | hey n4nd0, how's it going! | 19:42
nickon | I'm from Belgium :) | 19:45 |
nickon | sorry I gotta go eat! | 19:45 |
nickon | see ya | 19:45 |
n4nd0 | bye | 19:46 |
n4nd0 | harshit_: hi! good, and you? :D | 19:46
harshit_ | n4nd0:good :) | 19:48 |
harshit_ | hey, do you know what SG_ADD() does exactly? | 19:50
n4nd0 | harshit_: let me check | 19:52 |
n4nd0 | mmm not exactly | 19:52 |
n4nd0 | I remember Sören told me that m_parameters->add is used for things like serialization | 19:53
n4nd0 | but Heiko said that SG_ADD must be done in order to be able to use model selection | 19:53 |
n4nd0 | so I guess it is a kind of method to save some info about the class members | 19:54 |
n4nd0 | but actually no idea what it exactly does | 19:54 |
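[Editor's note: what n4nd0 describes — SG_ADD registering a class member so generic code (serialization, model selection) can enumerate and set it — can be sketched as a toy registry in Python. All names here are hypothetical illustrations, not Shogun's actual API.]

```python
class ParameterRegistry:
    """Toy analogue of Shogun's m_parameters: maps a name to the
    object that owns one registered member variable."""
    def __init__(self):
        self._params = {}

    def add(self, obj, attr, description=""):
        # Analogue of m_parameters->add: remember where the member lives.
        self._params[attr] = (obj, description)

    def to_dict(self):
        # Generic "serialization": dump every registered member.
        return {name: getattr(obj, name)
                for name, (obj, _) in self._params.items()}

    def set(self, name, value):
        # Generic access, e.g. for model selection to try parameter values.
        obj, _ = self._params[name]
        setattr(obj, name, value)

class GaussianKernelToy:
    def __init__(self, width=1.0):
        self.width = width
        self.m_parameters = ParameterRegistry()
        # Analogue of SG_ADD(&m_width, "width", "kernel width", ...)
        self.m_parameters.add(self, "width", "kernel width")

k = GaussianKernelToy(width=2.0)
k.m_parameters.set("width", 0.5)   # model selection can tune it by name
state = k.m_parameters.to_dict()   # serialization can dump it by name
```

The point of the pattern is that code which knows nothing about the concrete class can still save, load, and tune its registered parameters.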
-!- romovpa [b064f6fe@gateway/web/freenode/ip.176.100.246.254] has joined #shogun | 19:59 | |
harshit_ | got it .. | 20:03 |
harshit_ | thanks :) | 20:04 |
romovpa | Hi all. I've found that saving the model isn't implemented for LinearMachine and many other machines. I would like to implement the missing save/load methods. Is it a good idea? | 20:15 |
blackburn | romovpa: do you mean serialization? | 20:18 |
blackburn | romovpa: btw do you know vojtech franc? | 20:18 |
blackburn | harshit_: SG_ADD is m_parameters->add | 20:18 |
romovpa | yes, I mean | 20:18 |
blackburn | n4nd0: did you solve your issue? | 20:19 |
blackburn | romovpa: no, that's not needed it works already | 20:19 |
romovpa | blackburn: (about Vojtech Franc) no, I wrote him, but haven't received any answer | 20:20
blackburn | aha I see | 20:20 |
romovpa | blackburn: hmm... why couldn't I save my liblinear model through the cmdline interface? | 20:21
blackburn | oh you use this stuff.. | 20:21 |
romovpa | blackburn: =) | 20:21 |
blackburn | why cmdline? | 20:21 |
blackburn | anyway let me check how it works | 20:22 |
n4nd0 | blackburn: hey! you talking about spe or o-vs-o? | 20:23 |
blackburn | n4nd0: both probably | 20:23 |
n4nd0 | blackburn: ah, for o-vs-o I am waiting for Heiko's answer, spe is still in progress :) | 20:23
blackburn | ok I see | 20:24 |
blackburn | feel free to ask if you need help | 20:24 |
n4nd0 | thank you :) | 20:24 |
n4nd0 | blackburn: a theoretical thing, embedding == DR method? | 20:27 |
blackburn | n4nd0: embedding is a process of R^n -> R^t so yes | 20:28 |
blackburn | t<n | 20:28 |
n4nd0 | blackburn: aham, I understand that they are synonyms then | 20:28 |
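[Editor's note: a concrete instance of the R^n → R^t mapping (t < n) blackburn describes — a tiny PCA embedding in plain NumPy, not one of Shogun's preprocessors.]

```python
import numpy as np

def pca_embed(X, t):
    """Embed the rows of X from R^n into R^t (t < n) via PCA."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: the top-t right singular vectors span
    # the directions of maximal variance, ordered by singular value.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:t].T

X = np.random.default_rng(0).standard_normal((100, 5))
Y = pca_embed(X, 2)   # 100 points embedded from R^5 into R^2
```

Any dimensionality-reduction method fits this same signature: points in, lower-dimensional points out; only the criterion for choosing the embedding differs.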
Marty28 | python3? nice | 20:31 |
blackburn | romovpa: ok that makes sense probably but my advice is to use modular interface | 20:32 |
romovpa | blackburn: Could you explain to me what the right way of saving a trained model is? (Using libshogun and C++) Are the CMachine::save()/load() methods deprecated? | 20:32
blackburn | use save_serializable | 20:33 |
blackburn | romovpa: actually I do not know whether it should work right in static right now | 20:34 |
blackburn | but anyway save_serializable does the job | 20:34 |
romovpa | blackburn: I'm trying to use many interfaces and libshogun directly, thats ok =) | 20:35 |
blackburn | romovpa: main issue of static is we have to update it *manually* | 20:36 |
romovpa | ok, thanks | 20:36 |
blackburn | each time we add new feature we actually should update it | 20:36 |
blackburn | but actually we don't | 20:36 |
blackburn | so it becomes outdated day-by-day hah | 20:36 |
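[Editor's note: the round-trip romovpa is after — serialize a trained machine, restore it, get identical predictions — illustrated here with Python's pickle as a generic stand-in. This is not Shogun's save_serializable API; the class and its members are hypothetical.]

```python
import io
import pickle

class TrainedModel:
    """Stand-in for a trained LinearMachine: a weight vector and a bias."""
    def __init__(self, w, b):
        self.w = list(w)
        self.b = b

    def apply(self, x):
        # Linear decision function <w, x> + b.
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

# "Save": serialize the full model state to a byte stream
# (in practice this would be a file on disk).
model = TrainedModel(w=[0.5, -1.0], b=0.25)
buf = io.BytesIO()
pickle.dump(model, buf)

# "Load": reconstruct the model from the stream.
buf.seek(0)
restored = pickle.load(buf)
```

The invariant worth testing after any save/load implementation is that the restored machine reproduces the original's outputs exactly.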
romovpa | blackburn: documentation seems to have a similar problem | 20:45 |
blackburn | oh yes we hear this thing everyday | 20:46 |
harshit_ | blackburn: I have few questions regarding the liblinear/ocas and lbp features project .. can you help me out ? | 20:57 |
-!- romovpa [b064f6fe@gateway/web/freenode/ip.176.100.246.254] has quit [Ping timeout: 245 seconds] | 21:00 | |
blackburn | harshit_: yes | 21:06 |
blackburn | one warning - I believe it is not really relevant anymore | 21:06
blackburn | liblinear and ocas are up-to-date | 21:06 |
-!- harshit_ [~harshit@182.68.67.61] has quit [Ping timeout: 272 seconds] | 21:08 | |
-!- nickon [d5774105@gateway/web/freenode/ip.213.119.65.5] has quit [Quit: Page closed] | 21:18 | |
-!- harshit_ [~harshit@182.68.67.61] has joined #shogun | 21:18 | |
harshit_ | Exactly | 21:19 |
harshit_ | blackburn: only thing i noticed that is not in shogun is the multi class svm | 21:19 |
harshit_ | of liblinear | 21:19 |
blackburn | yes it is in | 21:19 |
harshit_ | you mean multiclass svm of liblinear, is now in shogun .? | 21:21 |
blackburn | yes, MulticlassLiblinear | 21:21 |
harshit_ | but in liblinear.cpp, under the MCSVM_CS case nothing is there except SG_NOTIMPLEMENTED | 21:23
blackburn | you probably have outdated shogun | 21:24 |
harshit_ | http://www.shogun-toolbox.org/doc/en/current/LibLinear_8cpp_source.html#l00078 | 21:24 |
harshit_ | oh | 21:24 |
harshit_ | actually i was checking out in documentation | 21:24 |
blackburn | source on site is not up-to-date for sure | 21:25 |
harshit_ | ok, so in the project I was considering for GSoC, actually nothing is left except lbp features | 21:26
blackburn | yeah probably :) | 21:26 |
blackburn | I believe it is not a problem | 21:27 |
blackburn | there is a week still and nobody wants you to start with project before proposals deadline | 21:27 |
blackburn | e.g. n4nd0 contributes with a variety of patches and it is actually ok | 21:28 |
harshit_ | ok , so could you tell me about decision trees | 21:31 |
harshit_ | I mean I remember you saying that | 21:31 |
harshit_ | its quite hard to implement | 21:31 |
harshit_ | why so ? | 21:31 |
blackburn | no, not really | 21:31 |
blackburn | I think summer would be enough to adapt it for shogun | 21:31 |
harshit_ | so C5.0 + lbp features will make a nice project ! | 21:33 |
blackburn | makes sense | 21:33 |
harshit_ | and regarding lbp features : i think the implementation of openCV is quite nice | 21:33 |
blackburn | that needs some discussion | 21:33 |
harshit_ | but i wonder why is the liblinear/ocas project is in ideas list when it is updated | 21:34 |
blackburn | actually I believe opencv features would be better | 21:34 |
blackburn | yes probably this one should be removed | 21:35 |
harshit_ | I think liblinear is going to release their regression feature this summer, so that might be why it is there. | 21:36
harshit_ | any thoughts on that ? | 21:37 |
blackburn | I have no idea actually | 21:37 |
harshit_ | Actually i am not able to chat with sonney2k, Time lag :( | 21:39 |
harshit_ | so I am having problem in refining my project | 21:39 |
blackburn | if you are sure liblinear will release you may add this to project | 21:40 |
-!- romi_ [~mizobe@187.74.1.223] has quit [Ping timeout: 246 seconds] | 21:40 | |
harshit_ | I think soeren is involved with liblinear too, So he will probably have a better idea abt release ! | 21:42 |
blackburn | no actually he is not involved with liblinear | 21:50 |
blackburn | so what are ideas you want to apply for? | 21:50 |
n4nd0 | blackburn: so one thing about SPE and using KNN with cover tree | 21:52 |
blackburn | aha shoot | 21:52 |
-!- romi_ [~mizobe@187.74.1.223] has joined #shogun | 21:52 | |
n4nd0 | blackburn: so I see that I have to use the same that is in LLE, as you told me | 21:53 |
blackburn | you don't have to but it has best performance | 21:53 |
n4nd0 | blackburn: but to copy the code there directly is not a very nice solution | 21:53 |
n4nd0 | blackburn: or is it ok like that? | 21:53 |
blackburn | why? | 21:53 |
blackburn | hmm you may inherit from isomap | 21:54 |
blackburn | but not really better | 21:54
n4nd0 | not to have the same code in several places ... | 21:54 |
harshit_ | blackburn: Actually saw his name in liblinear paper, | 21:54 |
blackburn | harshit_: are you sure? | 21:54 |
blackburn | n4nd0: ok let me describe my plan on libedrt | 21:54 |
n4nd0 | blackburn: ok | 21:54 |
blackburn | actually I plan to extract things into half-standalone libedrt | 21:54 |
blackburn | so it is ok, I will just reuse one function in solid library | 21:55 |
blackburn | it would take a while to adapt your code | 21:55 |
harshit_ | yeah , check this out :http://www.csie.ntu.edu.tw/~cjlin/papers/liblinear.pdf | 21:55 |
n4nd0 | blackburn: ok, I will just comment that it is taken from there | 21:55
n4nd0 | blackburn: c'mon, I will try to do it good so it is not that much to adapt :P | 21:55 |
harshit_ | and saw his name in libocas too :) | 21:56 |
blackburn | harshit_: he was action editor there | 21:56 |
blackburn | no involvement with project itself | 21:56 |
blackburn | he is an author of libocas | 21:56 |
blackburn | as well as v. franc | 21:56 |
harshit_ | oh gr8 | 21:57 |
blackburn | n4nd0: would require some adaptations anyway | 21:57 |
blackburn | just copy the code | 21:57 |
harshit_ | blackburn : I think C5.0 and lbp features will make a good project for me in gsoc, but the problem is that i dont have much knowledge about c5.0 | 22:02 |
blackburn | n4nd0: that code is duplicated in isomap already | 22:03 |
harshit_ | except some knowledge about decision trees and pruning in general | 22:03 |
n4nd0 | blackburn: ok | 22:03 |
n4nd0 | harshit_: in my opinion, you shouldn't probably be scared about that :) I'm sure it's sth you are able to pick up and implement | 22:04 |
harshit_ | thanks for boosting my confidence n4nd0, I think finally i'll go with that project only but just need some research before that | 22:06 |
n4nd0 | :) | 22:07 |
blackburn | n4nd0: actually I would suggest you to work on libedrt directly if it was already in shogun | 22:08 |
blackburn | but Soeren didn't like it and it is not ready so it is in separate branch yet | 22:09 |
n4nd0 | blackburn: I saw your commit proof of concept, but you didn't like it that much at the end, did you? | 22:10 |
n4nd0 | oh yeah, I remember whe you guys talked about that | 22:11 |
blackburn | n4nd0: yes but I still think I should get this done | 22:11 |
harshit_ | hey , Is there any reference on how cross validation is implemented in shogun . | 22:13 |
blackburn | I am afraid Heiko is the only reference for that :D | 22:14 |
harshit_ | okay, btw whats the nickname of heiko here ? | 22:15 |
blackburn | 'heiko' as his name usually | 22:16 |
n4nd0 | karlnapf too? | 22:16 |
blackburn | ah yes | 22:17 |
blackburn | I got wrong probably | 22:17 |
harshit_ | haven't seen him often! | 22:17
blackburn | karlnapf is more usual | 22:17 |
harshit_ | blackburn : there is a project named "Integrate SGD-QN" in the 2011 ideas list that actually seems easy. Should I work on it for now? | 22:19
harshit_ | as in, before gsoc | 22:20 |
-!- blackburn1 [~qdrgsm@31.28.34.168] has joined #shogun | 22:22 | |
-!- blackburn [~qdrgsm@62.106.114.183] has quit [Ping timeout: 246 seconds] | 22:23 | |
-!- blackburn1 is now known as blackburn | 22:24 | |
harshit_ | It is in shogun :( | 22:25 |
blackburn | what is in shogun? | 22:26 |
harshit_ | SGD-QN | 22:27 |
harshit_ | i was just going through 2011 ideas list, to get something cool to work on now | 22:28 |
blackburn | ah yes | 22:28 |
-!- vikram360 [~vikram360@117.192.188.233] has quit [Ping timeout: 276 seconds] | 22:32 | |
-!- vikram360 [~vikram360@117.192.168.95] has joined #shogun | 22:33 | |
n4nd0 | blackburn: this macro that is used around in DR methods | 22:51 |
n4nd0 | SG_DEBUG | 22:51 |
n4nd0 | I guess there must be an ifdef around there to activate/deactivate it, right | 22:51
n4nd0 | ? | 22:51 |
blackburn | n4nd0: no, why? | 22:51 |
n4nd0 | blackburn: oh, that's just what I expected | 22:52 |
n4nd0 | blackburn: so it is just like SG_PRINT, it will always appear on screen? | 22:52 |
blackburn | it prints nothing if the loglevel is above MSG_DEBUG | 22:52
n4nd0 | ok | 22:53 |
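[Editor's note: the gating blackburn describes — SG_DEBUG output reaches the screen only when the configured level is at or below debug, while SG_PRINT is unconditional — sketched as a toy logger. This mimics the idea only; it is not Shogun's SGIO implementation.]

```python
import io

# Ordered log levels, lowest = most verbose (in the spirit of MSG_DEBUG < MSG_INFO < ...).
MSG_DEBUG, MSG_INFO, MSG_ERROR = 0, 1, 2

class ToyIO:
    def __init__(self, loglevel=MSG_INFO, out=None):
        self.loglevel = loglevel
        self.out = out if out is not None else io.StringIO()

    def sg_debug(self, msg):
        # Prints nothing if the configured loglevel is above MSG_DEBUG.
        if self.loglevel <= MSG_DEBUG:
            self.out.write(msg + "\n")

    def sg_print(self, msg):
        # Unconditional output, like SG_PRINT.
        self.out.write(msg + "\n")

io_default = ToyIO(loglevel=MSG_INFO)
io_default.sg_debug("hidden")    # suppressed at the default level
io_default.sg_print("always")    # always shown

io_verbose = ToyIO(loglevel=MSG_DEBUG)
io_verbose.sg_debug("visible")   # shown once the level is lowered
```

So no compile-time ifdef is needed: the check is a cheap runtime comparison against the current loglevel.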
-!- harshit_ [~harshit@182.68.67.61] has quit [Remote host closed the connection] | 23:10 | |
-!- romovpa_ [b064f6fe@gateway/web/freenode/ip.176.100.246.254] has joined #shogun | 23:44 | |
-!- gsomix [~gsomix@85.26.234.192] has quit [Quit: ????? ? ?? ??? (xchat 2.4.5 ??? ??????)] | 23:49 | |
-!- gsomix [~gsomix@85.26.234.192] has joined #shogun | 23:49 | |
romovpa_ | could somebody explain to me what the abbreviation PLIF stands for? | 23:51
blackburn | romovpa_: piecewise linear function | 23:51 |
-!- Vuvu [~Vivan_Ric@115.248.130.148] has joined #shogun | 23:54 | |
-!- blackburn [~qdrgsm@31.28.34.168] has quit [Ping timeout: 246 seconds] | 23:59 | |
--- Log closed Sun Apr 01 00:00:19 2012 |
Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!