--- Log opened Fri Feb 10 00:00:19 2012
n4nd0 | I am getting a lot of compile errors trying to build shogun with cplex | 00:00 |
n4nd0 | not just in the examples but also in more important files such as mathematics/Cplex | 00:01 |
n4nd0 | is that normal? | 00:01 |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Remote host closed the connection] | 00:05 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 00:05 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 00:19 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 00:19 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 02:45 | |
-!- dfrx [~f-x@inet-hqmc07-o.oracle.com] has joined #shogun | 05:27 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 240 seconds] | 07:55 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun | 07:55 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 07:59 | |
-!- CIA-11 [~CIA@cia.atheme.org] has quit [Ping timeout: 416 seconds] | 08:16 | |
-!- CIA-18 [~CIA@cia.atheme.org] has joined #shogun | 08:16 | |
-!- shogun-buildbot_ [~shogun-bu@7nn.de] has joined #shogun | 08:21 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 252 seconds] | 08:26 | |
-!- wiking [~wiking@78-22-115-59.access.telenet.be] has joined #shogun | 08:35 | |
-!- wiking [~wiking@78-22-115-59.access.telenet.be] has quit [Changing host] | 08:35 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 08:35 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 08:45 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 244 seconds] | 09:06 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 09:06 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 09:19 | |
-!- Netsplit *.net <-> *.split quits: naywhayare | 09:29 | |
-!- Netsplit over, joins: naywhayare | 09:32 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 265 seconds] | 09:37 | |
-!- Netsplit *.net <-> *.split quits: @sonney2k, CIA-18, shogun-buildbot_, naywhayare, wiking, dfrx | 09:55 | |
-!- Netsplit over, joins: naywhayare, shogun-buildbot_, CIA-18, dfrx, @sonney2k | 09:55 | |
-!- naywhaya1e [~ryan@spoon.lugatgt.org] has joined #shogun | 09:59 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 10:01 | |
-!- naywhayare [~ryan@spoon.lugatgt.org] has quit [Ping timeout: 252 seconds] | 10:02 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 10:48 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 10:48 | |
-!- n4nd0 [~n4nd0@n145-p102.kthopen.kth.se] has joined #shogun | 10:50 | |
-!- dfrx [~f-x@inet-hqmc07-o.oracle.com] has quit [Quit: Leaving.] | 12:41 | |
-!- n4nd0 [~n4nd0@n145-p102.kthopen.kth.se] has quit [Quit: Leaving] | 12:51 | |
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun | 15:56 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 276 seconds] | 16:00 | |
-!- wiking_ is now known as wiking | 16:00 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 16:02 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 16:02 | |
CIA-18 | shogun: Soeren Sonnenburg master * rdb752f7 / (6 files in 2 dirs): | 17:27 |
CIA-18 | shogun: Merge pull request #368 from vigsterkr/master | 17:27 |
CIA-18 | shogun: Add Jensen-Shannon kernel - http://git.io/hOxavQ | 17:27 |
CIA-18 | shogun: Viktor Gal master * re0a1155 / (6 files in 2 dirs): | 17:27 |
CIA-18 | shogun: Add Jensen-Shannon kernel This patch adds a CDotKernel based Jensen-Shannon | 17:27 |
CIA-18 | shogun: kernel to shogun-toolbox. Sources of modular interface has been changed so it | 17:27 |
CIA-18 | shogun: can be used via the modular interfaces as well. - http://git.io/z196rQ | 17:27 |
CIA-18 | shogun: Viktor Gal master * r51fcf29 / src/shogun/kernel/JensenShannonKernel.cpp : Change to CMath::log2 in JensenShannonKernel - http://git.io/MMk_jQ | 17:27 |
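A minimal NumPy sketch of the Jensen-Shannon kernel merged above (plus the histogram intersection kernel, HIK, that it gets compared against later in this log). It follows the standard per-histogram formula and is only an illustration, not shogun's C++ implementation:

```python
import numpy as np

def js_kernel(x, y):
    # Jensen-Shannon kernel between two non-negative histograms:
    # k(x, y) = sum_i x_i/2 * log2((x_i + y_i)/x_i) + y_i/2 * log2((x_i + y_i)/y_i),
    # where terms with a zero bin are taken as 0.
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = x + y
    mx, my = x > 0, y > 0
    return (0.5 * np.sum(x[mx] * np.log2(s[mx] / x[mx]))
            + 0.5 * np.sum(y[my] * np.log2(s[my] / y[my])))

def hik_kernel(x, y):
    # Histogram intersection kernel, for comparison: k(x, y) = sum_i min(x_i, y_i).
    return float(np.minimum(x, y).sum())
```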
wiking | yeey my first patch applied to HEAD \o/ | 17:35 |
-!- naywhaya1e is now known as naywhayare | 18:00 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 19:12 | |
n4nd0 | hi there | 19:13 |
n4nd0 | can someone help me with cplex installation? | 19:14 |
n4nd0 | I have found quite a few compile errors | 19:14 |
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun | 20:04 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 245 seconds] | 20:05 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 252 seconds] | 20:08 | |
-!- wiking_ is now known as wiking | 20:08 | |
-!- blackburn [~qdrgsm@83.234.54.44] has joined #shogun | 20:14 | |
blackburn | wiking: is this J-S kernel better on histograms? | 20:15 |
wiking | blackburn: for my data set definitely | 20:15 |
blackburn | what is your dataset? | 20:16 |
wiking | there's a +4% on average accuracy | 20:16 |
blackburn | better than HIK? | 20:16 |
wiking | but then again it's quite slower as well :P | 20:16 |
wiking | yep better than HIK | 20:16 |
blackburn | huh | 20:16 |
blackburn | ! | 20:16 |
blackburn | interesting | 20:16 |
wiking | but as said i think it very much depends on your dataset as well | 20:17 |
wiking | as always though :P | 20:17 |
wiking | but i have a simple bag of visual words setup | 20:18 |
wiking | and if i l2-norm the tf vectors | 20:18 |
wiking | js is better than hik | 20:18 |
blackburn | I've used HOG with HIK, was better than linear of course | 20:18 |
blackburn | but I didn't know anything about js kernel | 20:18 |
wiking | and about the train/infer speed... it could easily be done in a way that uses linear classification | 20:19 |
wiking | http://www.vlfeat.org/~vedaldi/assets/pubs/vedaldi11efficient.pdf | 20:19 |
wiking | but it requires some extra coding for shogun... | 20:19 |
blackburn | btw are you a student? | 20:19 |
wiking | ye phd | 20:19 |
wiking | so the title is: Efficient Additive Kernels via Explicit Feature Maps | 20:19 |
blackburn | yeah seen that | 20:19 |
wiking | part of vlfeat project... | 20:20 |
wiking | so there's already code of course | 20:20 |
wiking | i mean i was already surprised about HIK performance... | 20:20 |
blackburn | I'm just curious whether you are going to apply as gsoc student :) | 20:20 |
wiking | hahahahaha | 20:20 |
wiking | i don't know | 20:20 |
wiking | did once a GSoC project | 20:21 |
wiking | was fun | 20:21 |
wiking | but last year was weird | 20:21 |
blackburn | which one? | 20:21 |
wiking | i've applied to a gsoc project in the opencv proj... and didn't get it... and in the end the whole gsoc project just failed; it seems the guy who got it just went off... | 20:21 |
wiking | so it's a shame | 20:22 |
wiking | for shogun it'd be great to add some new machines as part of gsoc project | 20:22 |
blackburn | so you wasn't a gsoc student? | 20:22 |
blackburn | weren't sorry :D | 20:22 |
wiking | last year not | 20:22 |
wiking | i was in 2009 | 20:22 |
blackburn | oh I see | 20:22 |
blackburn | I was shogun gsoc student last year | 20:23 |
wiking | what have you done? | 20:24 |
blackburn | dim reduction | 20:24 |
wiking | ah cool, the k(pac) part? | 20:24 |
wiking | *pca | 20:24 |
wiking | so i've ment (k)pca | 20:24 |
wiking | :P | 20:24 |
blackburn | mainly LLE and similar once | 20:24 |
blackburn | ones | 20:24 |
blackburn | :D | 20:24 |
wiking | :> | 20:24 |
blackburn | http://shogun-toolbox.org/edrt/ | 20:25 |
wiking | i've seen on last year's project ideas that there was an interest in doing latent s-svm | 20:25 |
wiking | oh fuck | 20:25 |
wiking | niice | 20:25 |
blackburn | recently I've submitted a paper on that thing | 20:26 |
wiking | isn't it merged into shogun master branch? | 20:26 |
blackburn | well merged.. :) | 20:26 |
wiking | on which thing? | 20:26 |
wiking | you mean on your gsoc proj? | 20:26 |
blackburn | not really, I called it toolkit | 20:26 |
blackburn | and said it is integrated to shogun | 20:27 |
blackburn | http://dl.dropbox.com/u/10139213/shogun/lisitsyn12a.pdf | 20:27 |
wiking | ah cool | 20:28 |
wiking | i'll go through it | 20:28 |
blackburn | nothing interesting :) | 20:28 |
wiking | heheh | 20:29 |
wiking | niiice it's a jmlr article | 20:29 |
wiking | i'm working now on an ieee transactions on multimedia paper... and will try to submit it to pami | 20:29 |
wiking | but my hopes aren't so ... high :P | 20:30 |
blackburn | it is my first paper _ever_ so I hope it would be accepted :) | 20:30 |
wiking | hihi | 20:30 |
wiking | today i should get a notification for a paper of mine as well... wonder when it is going to happen :> | 20:30 |
wiking | so yeah try the j-s stuff and let me know how's it doing for you | 20:31 |
blackburn | sure | 20:31 |
wiking | ah yeah use l2-norm.... | 20:31 |
blackburn | I'm using HOG-like thing | 20:31 |
wiking | mmm | 20:31 |
blackburn | without blocks in fact :) | 20:32 |
wiking | do u have a good (meaning c) implementation of it | 20:32 |
wiking | ? | 20:32 |
blackburn | nope | 20:32 |
blackburn | stole python one from google | 20:32 |
wiking | hahahaha | 20:32 |
wiking | how's that working for you? | 20:32 |
wiking | fast enough? | 20:32 |
blackburn | http://code.google.com/p/python-cvgreyc/source/browse/trunk/cvgreyc/features/HOG.py?r=83 | 20:32 |
blackburn | well svm training takes 3h so I don't care | 20:33 |
blackburn | :D | 20:33 |
blackburn | I do road signs recognition | 20:33 |
wiking | what's your average resolution for a picture? | 20:33 |
blackburn | I resize it to 60x60 | 20:33 |
wiking | because i have like tons of pix (one is a 14 gigs the other is 22 gigs dataset) | 20:33 |
blackburn | oh | 20:34 |
wiking | so i'm already using hadoop cluster to get the features | 20:34 |
blackburn | dataset I use contains 39K training vectors and 12.6K test ones | 20:34 |
wiking | maybe i'll reimplement then this python thingy in opencv | 20:34 |
wiking | ah ok | 20:34 |
wiking | niiice | 20:34 |
blackburn | hmm doesn't opencv have HOG already? | 20:35 |
wiking | mmm | 20:35 |
wiking | they have something specific | 20:35 |
wiking | for pedestrian detection | 20:35 |
blackburn | yeah | 20:35 |
blackburn | I find opencv rather unusable | 20:36 |
wiking | hehehe yeah | 20:36 |
wiking | i'm just using it for some basic shit | 20:36 |
wiking | it's very bloated now | 20:36 |
wiking | and now they have this whole transition of the api | 20:37 |
wiking | and some of the bugs i've encountered... so it gave me quite a headache sometimes | 20:37 |
wiking | but anyhow some basic functions are there, so it's ok | 20:37 |
blackburn | I would be disappointed if JS will be better on my dataset :D | 20:38 |
wiking | hahahaha why? :) | 20:38 |
blackburn | I was near to submit a paper to some local journal here | 20:38 |
blackburn | in russian | 20:38 |
wiking | well | 20:38 |
blackburn | it was about HIK and HOG | 20:38 |
wiking | just change fast the thingy | 20:38 |
wiking | to JS and HOG | 20:38 |
wiking | :>>. | 20:38 |
blackburn | :D yeah but I have formulated multiclass thing | 20:38 |
wiking | you know: s/HIK/JS/g | 20:39 |
wiking | and there you go | 20:39 |
blackburn | the one I said yesterday | 20:39 |
wiking | :) | 20:39 |
blackburn | with O(log) | 20:39 |
wiking | ah | 20:39 |
blackburn | anyway it is pretty obvious | 20:39 |
blackburn | wiking: so JS is the best histogram kernel you know? | 20:40 |
wiking | hehehehe well you know occam's razor... the most obvious should be the best | 20:40 |
wiking | blackburn: hehehehe well this is so far giving me the best results | 20:40 |
wiking | i've read now fast several papers on it | 20:40 |
blackburn | I see | 20:40 |
wiking | some people of course like chi2 as well | 20:40 |
wiking | but that said you should do l1-norm on your features before using chi2 kernel | 20:41 |
wiking | makes a big difference :P | 20:41 |
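The normalisation advice here (l2-normalise the tf/histogram vectors before the JS kernel, l1-normalise before chi2) is easy to apply up front; a small sketch, assuming dense row-wise feature matrices:

```python
import numpy as np

def l1_normalize(X, eps=1e-12):
    # Row-wise L1 normalisation: each histogram sums to 1 (suggested before chi2).
    return X / np.maximum(np.abs(X).sum(axis=1, keepdims=True), eps)

def l2_normalize(X, eps=1e-12):
    # Row-wise L2 normalisation (suggested before the Jensen-Shannon kernel).
    return X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), eps)
```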
blackburn | I see | 20:41 |
wiking | but yeah i've tried here chi2, hik and now js on my dataset... and js came out the best | 20:41 |
wiking | with LaRank machine | 20:41 |
blackburn | I use GMNP now | 20:41 |
blackburn | but I guess your dataset is too big | 20:42 |
wiking | mmm it's not that bad actually | 20:42 |
wiking | in the end i'm now only using 4k features and 2k samples | 20:42 |
blackburn | ah | 20:42 |
wiking | so it's pretty small | 20:42 |
blackburn | then try GMNP too if you didn't | 20:42 |
wiking | have | 20:42 |
wiking | LaRank gave me better accuracy | 20:43 |
blackburn | really? | 20:43 |
wiking | well at least with HIK | 20:43 |
wiking | haven't tried now with JS | 20:43 |
blackburn | that's damn strange! | 20:43 |
wiking | i can give it a go... | 20:43 |
wiking | :) | 20:43 |
blackburn | I usually get better results with GMNP | 20:43 |
blackburn | wiking: I just glanced over you blog ;) | 20:46 |
wiking | hahaha | 20:47 |
blackburn | are you hungarian? | 20:47 |
wiking | very un-up-to-date and very not about my research :> | 20:47 |
wiking | sort of | 20:47 |
wiking | born in serbia | 20:47 |
blackburn | serbia? nice | 20:47 |
wiking | ok lemme try gnmp | 20:48 |
wiking | *gmnp that is | 20:48 |
wiking | do i have to switch to vpn | 20:48 |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 20:48 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 20:48 | |
wiking | back | 20:48 |
blackburn | have you ever wondered what GMNP stands for? :) | 20:49 |
blackburn | I bet the reason why you understand my broken english is you are non-native as well | 20:49 |
blackburn | :D | 20:49 |
wiking | hahahah | 20:49 |
wiking | no worries your english is fine | 20:49 |
wiking | ok running the test... what does gmnp stand for? | 20:51 |
blackburn | hah | 20:52 |
blackburn | it stands for generalized minimal norm problem | 20:52 |
wiking | hehe still training ;) | 20:53 |
wiking | takes longer than larank for sure | 20:53 |
blackburn | yeah it is slower a little | 20:53 |
blackburn | did you tune epsilon? | 20:53 |
wiking | well i set it to the same as larank 1e-5 | 20:53 |
blackburn | in my experience epsilon=1e-2 was ok | 20:53 |
blackburn | ah I see | 20:53 |
wiking | let's see what it does with the same eps | 20:54 |
blackburn | it would train much slower | 20:54 |
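Roughly, the LaRank vs GMNP comparison being run here would look like the following with shogun's modular Python interface of that era. The class names (GMNPSVM, LaRank, the new JensenShannonKernel) come from the chat and shogun's examples, but constructor arguments, the label class, and the kernel's signature may differ between versions, so treat this as a sketch rather than working code:

```python
from shogun.Features import RealFeatures, Labels
from shogun.Kernel import JensenShannonKernel
from shogun.Classifier import GMNPSVM, LaRank

feats_train = RealFeatures(train_matrix)   # columns = l2-normalised histograms
feats_test = RealFeatures(test_matrix)
labels = Labels(train_labels)              # multiclass labels, e.g. 0..17

kernel = JensenShannonKernel(feats_train, feats_train)

for Machine, eps in [(LaRank, 1e-5), (GMNPSVM, 1e-2)]:
    kernel.init(feats_train, feats_train)
    svm = Machine(1.0, kernel, labels)     # C=1.0 picked arbitrarily here
    svm.set_epsilon(eps)                   # the epsilon being tuned in the chat
    svm.train()
    kernel.init(feats_train, feats_test)
    predictions = svm.apply().get_labels()
```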
wiking | btw in kernels there could be some SIMD instructions | 20:54 |
wiking | so like in case of JS | 20:54 |
wiking | the whole for loop could be parallelized | 20:55 |
blackburn | sure but we have to do it generic.. | 20:55 |
blackburn | in shogun | 20:55 |
wiking | yeah i know but still | 20:55 |
wiking | is there use of OpenMP | 20:55 |
wiking | ? | 20:55 |
blackburn | no, we use pthread | 20:55 |
wiking | so simply pthread? | 20:56 |
wiking | that's kind of hc :> | 20:56 |
wiking | i mean hardcore | 20:56 |
blackburn | yeah, take a look on shogun/converter/LocallyLinearEmbedding.cpp | 20:56 |
blackburn | I would say it is hardcore porn | 20:56 |
blackburn | I bet the most hardcore code is shogun's arpack wrapper written by me as well | 20:57 |
blackburn | shogun/mathematics/arpack.cpp | 20:58 |
blackburn | it embeds superlu, blas, lapack and arpack at the same time | 20:58 |
wiking | hhihihi | 20:58 |
wiking | ok now it's doing the inference... | 20:59 |
blackburn | btw how many classes? | 20:59 |
wiking | 18 | 20:59 |
blackburn | I see | 20:59 |
blackburn | I have 43 | 20:59 |
wiking | mmm | 20:59 |
wiking | if there's difference then it's like within 1% | 21:00 |
wiking | just got the average acc | 21:00 |
wiking | and i cannot recall the digits after the decimal point :P | 21:00 |
blackburn | is it worse? | 21:00 |
blackburn | :D | 21:00 |
wiking | so if then there is like 0.3% difference | 21:00 |
wiking | let's see with another eps | 21:01 |
wiking | anyhow i should do the hog thingy | 21:02 |
wiking | i'm using currently DSIFT features | 21:02 |
wiking | no actually i'm not telling the truth... now it's with affine-sift :P | 21:02 |
blackburn | I'm pretty lame with SIFT still | 21:03 |
blackburn | what is the main difference between SIFT and HOG? | 21:04 |
blackburn | I saw SIFT uses histograms too | 21:04 |
wiking | well | 21:04 |
wiking | first of all sift has a key point detector part as well | 21:04 |
wiking | so it's not just giving you a descriptor | 21:04 |
wiking | but key points (scale invariant) in a pic | 21:05 |
blackburn | so, it finds key points | 21:05 |
blackburn | and then calculates features similar to hog around these points? | 21:05 |
wiking | as said in its name it should be scale invariant | 21:05 |
blackburn | the most attractive thing for me is that HOG can be formulated as integral image | 21:06 |
wiking | but yeah when u ask for a sift descriptor it's pretty similar with hog | 21:07 |
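blackburn's remark that HOG "can be formulated as integral image" refers to the integral-histogram trick: one cumulative-sum image per orientation bin makes any cell's histogram an O(1) lookup. A rough, unoptimised sketch under that idea (not his code, and without HOG's usual block normalisation):

```python
import numpy as np

def hog_cells(gray, n_bins=9, cell=8):
    # Per-cell unsigned orientation histograms via per-bin integral images.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                  # unsigned, in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    H, W = gray.shape
    integ = np.zeros((n_bins, H + 1, W + 1))
    for b in range(n_bins):                                  # one integral image per bin
        integ[b, 1:, 1:] = np.where(bins == b, mag, 0.0).cumsum(0).cumsum(1)

    ys = range(0, H - cell + 1, cell)
    xs = range(0, W - cell + 1, cell)
    feats = np.zeros((len(ys), len(xs), n_bins))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            # Any rectangle's histogram costs 4 lookups per bin.
            feats[i, j] = (integ[:, y + cell, x + cell] - integ[:, y, x + cell]
                           - integ[:, y + cell, x] + integ[:, y, x])
    return feats
```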
wiking | but of course the shit part with sift is that it's patented | 21:07 |
wiking | i mean u can use it for academic purposes but then again | 21:07 |
blackburn | I don't really care about patents for now | 21:07 |
wiking | but yeah i've seen the bag of visual words being patented as well | 21:07 |
wiking | so it's pretty fucked up with USA :> | 21:07 |
blackburn | well even for commercial purpose I can use HOG/SIFT/anyway-patented here :) | 21:08 |
wiking | damn i should implement my MKL thingy | 21:08 |
wiking | hehehe yeah russia is cool | 21:08 |
blackburn | cause I live in snowy nigeria hah | 21:09 |
wiking | and afaik in eu it's the same | 21:09 |
blackburn | do you use mkl for images? | 21:09 |
wiking | wel | 21:09 |
wiking | i have different type of features | 21:09 |
blackburn | I've been thinking about it | 21:09 |
blackburn | ahh I see | 21:09 |
wiking | some of them coming from textual | 21:09 |
wiking | and the textual part is quite sparse | 21:10 |
wiking | so i should actually do some laplacian kernel on the textual (sparse) features | 21:10 |
wiking | and JS on the histogram features | 21:10 |
wiking | i'm hoping that it'd make a difference | 21:10 |
blackburn | sad JS can't be formulated with this O(log N) | 21:10 |
blackburn | as HIK | 21:11 |
wiking | until now i was just concatenating all the features and used a simple polykernel | 21:11 |
wiking | mkl should do better | 21:11 |
blackburn | sure | 21:11 |
wiking | eheheh | 21:11 |
wiking | so | 21:11 |
wiking | 0.7841796875 with eps 1e-5 | 21:11 |
wiking | 0.78515625 with eps 1e-2 | 21:11 |
wiking | so there's a difference | 21:12 |
wiking | but quite insignificant | 21:12 |
blackburn | yeah looks like 3 examples or so? :) | 21:12 |
wiking | yeah something like that | 21:12 |
blackburn | what is larank best? | 21:12 |
wiking | mmm good queston | 21:12 |
wiking | i know it was as well 0.78... | 21:12 |
blackburn | I see | 21:12 |
blackburn | I have best 97.32% accuracy on my data | 21:13 |
wiking | heheheh | 21:13 |
blackburn | hope to get it to 98% | 21:13 |
blackburn | with some improvements | 21:13 |
wiking | what's your test set size? | 21:13 |
blackburn | 12630 | 21:14 |
blackburn | 729 features | 21:15 |
wiking | have u tried deep learning? | 21:15 |
blackburn | nope | 21:16 |
blackburn | yeah it is better on that data I know | 21:16 |
blackburn | but I hope to get similar accuracy with svm | 21:16 |
wiking | i should try deep learning | 21:18 |
wiking | haven't got around it yet | 21:18 |
wiking | theano seems to be a good tool for it | 21:18 |
blackburn | I don't like NNs without any reason :D | 21:20 |
wiking | hahahahha | 21:20 |
wiking | well they'll learn everything... just needs some time | 21:20 |
wiking | and overfit :> | 21:21 |
blackburn | and needs some crazy tuning.. | 21:21 |
wiking | hehheheheh yeps | 21:21 |
wiking | but anyhow it works as well | 21:21 |
wiking | quite well | 21:21 |
blackburn | sure | 21:21 |
wiking | 0.783203125 eps 1e-2 | 21:21 |
blackburn | larank? | 21:22 |
wiking | y | 21:22 |
blackburn | so gmnp was better! | 21:22 |
blackburn | :) | 21:22 |
wiking | hahahaha | 21:22 |
wiking | so yeah my best result on this dataset with many more features but a simple poly kernel is 0.879 | 21:24 |
wiking | so i'm hoping that maybe i can do something now about this with mkl | 21:25 |
blackburn | hmm strange that such a naive way was so much better? | 21:25 |
wiking | but yeah | 21:25 |
wiking | with many more features... | 21:25 |
wiking | now on the same feature set it's 0.73 with the poly kernel | 21:26 |
wiking | but yeah JS is cool :>>. | 21:26 |
wiking | but slow :) | 21:26 |
blackburn | it would be unbelievably cool if using JS gave me ~98% accuracy :) | 21:29 |
wiking | hahahhaha | 21:30 |
wiking | well give it a go | 21:30 |
wiking | or it's hard to switch the kernel? | 21:30 |
wiking | 0.783203125 with eps 1e-5 | 21:30 |
blackburn | no, not hard | 21:30 |
wiking | so yeah actually GMNP is better :> | 21:30 |
blackburn | one line to switch | 21:30 |
blackburn | :) | 21:30 |
wiking | thanks for the tip | 21:31 |
blackburn | the only problem is that training takes 8900s | 21:31 |
wiking | ahhahaha | 21:31 |
wiking | try then the vlfeat stuff | 21:32 |
wiking | extra coding though... buuut | 21:32 |
wiking | much faster training | 21:32 |
blackburn | which one? | 21:32 |
wiking | because it has JS kernel | 21:32 |
wiking | but used with this homogeneous kernel mapping | 21:32 |
blackburn | hmm how can it help? | 21:32 |
wiking | so it's actually running linear | 21:33 |
blackburn | ah | 21:33 |
wiking | the guy wrote the mapping for both the JS and HIK kernels | 21:33 |
wiking | and even has matlab api if u prefer that maybe | 21:34 |
blackburn | I got no time to get into that paper | 21:34 |
blackburn | how can it be possible? | 21:34 |
wiking | what? | 21:34 |
blackburn | -> linear it | 21:34 |
wiking | well as said | 21:35 |
wiking | in that paper i've mentioned | 21:35 |
blackburn | is it creating new features? | 21:35 |
wiking | well mapping the features as far as i understand | 21:35 |
blackburn | explicitly? | 21:35 |
wiking | and then those features are simply fed to a linear svm | 21:35 |
blackburn | I see | 21:36 |
wiking | http://www.vlfeat.org/api/homkermap.html#homkermap-overview | 21:36 |
wiking | according to the paper he not only has faster running time with this of course | 21:36 |
wiking | but sometimes the accuracy is better as well | 21:37 |
blackburn | hmm | 21:38 |
blackburn | size of d(2n+1)?! | 21:43 |
blackburn | why is it faster? | 21:44 |
blackburn | with such vast feature spaces | 21:44 |
wiking | :) | 21:47 |
wiking | well check the paper for running times | 21:47 |
wiking | same or better accuracy with 1/7 training time | 21:49 |
blackburn | I just can't understand how it can be faster than Maji's approximation | 21:53 |
blackburn | afaik it doesn't depend on number of support vectors.. | 21:55 |
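The homogeneous-kernel-map idea linked above (Vedaldi & Zisserman) approximates an additive kernel like chi2/JS/HIK by an explicit finite-dimensional feature map, so a plain linear SVM can be trained on the mapped features instead of a kernel machine on the Gram matrix. vlfeat implements this in C/Matlab; as a hedged stand-in, scikit-learn ships an analogous explicit map for the additive chi2 kernel, which illustrates the "map, then go linear" workflow:

```python
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# X_train / X_test are non-negative (e.g. l1-normalised) histograms.
# sample_steps sets how many sampling points the map uses; the expanded feature
# space is only a few times the original dimension (the d(2n+1) size mentioned above).
clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC(C=1.0))
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```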
-!- dfrx [~f-x@inet-hqmc06-o.oracle.com] has joined #shogun | 22:08 | |
wiking | yeees | 22:35 |
wiking | one paper accepted | 22:35 |
wiking | ! | 22:35 |
wiking | :)) | 22:35 |
wiking | mmm i wasn't expecting the result of this one yet... so let's see how the other paper will do...fingerzcrossed!! | 22:40 |
-!- n4nd0 [~androirc@s83-179-44-135.cust.tele2.se] has joined #shogun | 22:56 | |
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 22:59 | |
nando_ | blackburn, hey! | 23:00 |
-!- n4nd0 [~androirc@s83-179-44-135.cust.tele2.se] has quit [Client Quit] | 23:00 | |
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has left #shogun [] | 23:00 | |
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 23:00 | |
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has left #shogun [] | 23:01 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 23:02 | |
blackburn | n4nd0: hey | 23:25 |
blackburn | wiking: congrats! | 23:26 |
wiking | blackburn: cheers | 23:26 |
n4nd0 | blackburn, I faced some trouble trying to compile shogun with support for cplex | 23:27 |
blackburn | n4nd0: okay, are there any errors? | 23:27 |
n4nd0 | blackburn, yeah, I had to make a couple of changes in the configure file to detect my cplex version | 23:28 |
n4nd0 | blackburn, I don't know if I screwed something when doing that, but before it was not detecting cplex when doing configure | 23:28 |
blackburn | hmm that's strange | 23:29 |
n4nd0 | blackburn, and well after that ... there were quite a few of compile errors | 23:29 |
blackburn | let me check whether it is being detected here | 23:29 |
n4nd0 | you think is strange that I needed to change the configure? | 23:29 |
blackburn | sure | 23:29 |
n4nd0 | I read something on that in a forum, I think the configure script is prepared to work out of the box with version 9 of cplex | 23:30 |
blackburn | oh never thought it is proprietary :D | 23:31 |
blackburn | okay lets check what you have changed? | 23:31 |
n4nd0 | all right | 23:32 |
blackburn | I guess it would take more time to install/test things here than just 'use' you :) | 23:32 |
n4nd0 | http://snipt.org/uIfg6 | 23:34 |
n4nd0 | I commented three lines there and just did minor changes | 23:34 |
n4nd0 | I was also surprised with the software being proprietary :-O | 23:34 |
n4nd0 | and took a long while to get the trial version for students! | 23:35 |
blackburn | okay sure it is hardcoded now | 23:35 |
blackburn | so you had to fix it | 23:35 |
n4nd0 | the lines I added are the ones that are just above the commented ones | 23:35 |
blackburn | so, now it is being detected right? | 23:36 |
n4nd0 | yes :-) | 23:36 |
n4nd0 | that was the good part | 23:36 |
blackburn | okay then lets check errors | 23:36 |
n4nd0 | the bad part ... when I configure with support for cplex | 23:36 |
n4nd0 | make explodes :-O | 23:36 |
n4nd0 | so a common error I found in several files | 23:37 |
blackburn | is it a nuclear explosion or just like some bomb? | 23:37 |
blackburn | :D | 23:37 |
n4nd0 | :-P still alive around here | 23:38 |
n4nd0 | so this common error, like in classifier/SubGradientLPM.cpp | 23:38 |
n4nd0 | there were quite a few references to a class called CLinearClassifier | 23:39 |
n4nd0 | that was not found, it doesn't appear in the doc either | 23:40 |
blackburn | hmmmmmmm | 23:40 |
blackburn | wait | 23:40 |
blackburn | haha | 23:40 |
blackburn | indeed | 23:40 |
n4nd0 | I changed those to CLinearMachine ... don't know if that was a good decision | 23:40 |
blackburn | yeah it was obviously right decision | 23:40 |
blackburn | let me change it | 23:40 |
n4nd0 | cool | 23:40 |
n4nd0 | as far as I remember those were in | 23:41 |
n4nd0 | SubGradientLPM header and source | 23:41 |
blackburn | LPBoost too | 23:41 |
n4nd0 | yeah | 23:41 |
n4nd0 | ah and another one in SubGradientLPM.h | 23:41 |
blackburn | hmm | 23:42 |
n4nd0 | the include of qpbsvmlib.h | 23:42 |
n4nd0 | the file is called QPBSVMLib.h | 23:42 |
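A throwaway helper for the rename n4nd0 describes (the same spirit as the in-channel s/HIK/JS/g joke, written in Python). The two substitutions and the file list come straight from the chat, so check them against your own tree before running anything like this:

```python
from pathlib import Path

# CLinearClassifier was renamed to CLinearMachine in the Classifier->Machine
# transition; SubGradientLPM.h also still includes the old qpbsvmlib.h name.
renames = {"CLinearClassifier": "CLinearMachine",
           "qpbsvmlib.h": "QPBSVMLib.h"}

files = ["src/shogun/classifier/SubGradientLPM.h",
         "src/shogun/classifier/SubGradientLPM.cpp",
         "src/shogun/classifier/LPBoost.h",
         "src/shogun/classifier/LPBoost.cpp"]

for name in files:
    p = Path(name)
    text = p.read_text()
    for old, new in renames.items():
        text = text.replace(old, new)
    p.write_text(text)
```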
blackburn | do you know how to use github pull requests? | 23:42 |
n4nd0 | yeah | 23:42 |
blackburn | I just thought you could do it :) | 23:43 |
n4nd0 | sure, cool | 23:43 |
blackburn | and I would just merge it | 23:43 |
blackburn | LinearClassifier vs LinearMachine is an issue related to a transition we made a year ago | 23:43 |
n4nd0 | ok | 23:43 |
blackburn | to generalize classification/regression things | 23:44 |
blackburn | we have renamed Classifier to Machine | 23:44 |
blackburn | it was damn ugly to have Regression derived from Classification :) | 23:44 |
blackburn | and the reason why we haven't detected it is obvious too :) | 23:45 |
n4nd0 | :) | 23:45 |
n4nd0 | there were also some issues with classifier/svm/CPLEXSVM* | 23:46 |
n4nd0 | and mathematics/Cplex* | 23:46 |
n4nd0 | can you reproduce the compile error? | 23:46 |
n4nd0 | ping | 23:58 |
blackburn | here | 23:59 |
blackburn | n4nd0: not really | 23:59 |
--- Log closed Sat Feb 11 00:00:16 2012 |