IRC logs of #shogun for Friday, 2012-02-10

--- Log opened Fri Feb 10 00:00:19 2012
n4nd0I am getting a lot of compile errors when building shogun with cplex00:00
n4nd0not just in the examples but also in more important files such as mathematics/Cplex00:01
n4nd0is that normal?00:01
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Remote host closed the connection]00:05
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun00:05
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]00:19
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun00:19
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]02:45
-!- dfrx [~f-x@inet-hqmc07-o.oracle.com] has joined #shogun05:27
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 240 seconds]07:55
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun07:55
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun07:59
-!- CIA-11 [~CIA@cia.atheme.org] has quit [Ping timeout: 416 seconds]08:16
-!- CIA-18 [~CIA@cia.atheme.org] has joined #shogun08:16
-!- shogun-buildbot_ [~shogun-bu@7nn.de] has joined #shogun08:21
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 252 seconds]08:26
-!- wiking [~wiking@78-22-115-59.access.telenet.be] has joined #shogun08:35
-!- wiking [~wiking@78-22-115-59.access.telenet.be] has quit [Changing host]08:35
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun08:35
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]08:45
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 244 seconds]09:06
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun09:06
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun09:19
-!- Netsplit *.net <-> *.split quits: naywhayare09:29
-!- Netsplit over, joins: naywhayare09:32
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 265 seconds]09:37
-!- Netsplit *.net <-> *.split quits: @sonney2k, CIA-18, shogun-buildbot_, naywhayare, wiking, dfrx09:55
-!- Netsplit over, joins: naywhayare, shogun-buildbot_, CIA-18, dfrx, @sonney2k09:55
-!- naywhaya1e [~ryan@spoon.lugatgt.org] has joined #shogun09:59
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun10:01
-!- naywhayare [~ryan@spoon.lugatgt.org] has quit [Ping timeout: 252 seconds]10:02
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]10:48
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun10:48
-!- n4nd0 [~n4nd0@n145-p102.kthopen.kth.se] has joined #shogun10:50
-!- dfrx [~f-x@inet-hqmc07-o.oracle.com] has quit [Quit: Leaving.]12:41
-!- n4nd0 [~n4nd0@n145-p102.kthopen.kth.se] has quit [Quit: Leaving]12:51
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun15:56
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 276 seconds]16:00
-!- wiking_ is now known as wiking16:00
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]16:02
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun16:02
CIA-18shogun: Soeren Sonnenburg master * rdb752f7 / (6 files in 2 dirs):17:27
CIA-18shogun: Merge pull request #368 from vigsterkr/master17:27
CIA-18shogun: Add Jensen-Shannon kernel - http://git.io/hOxavQ17:27
CIA-18shogun: Viktor Gal master * re0a1155 / (6 files in 2 dirs):17:27
CIA-18shogun: Add Jensen-Shanon kernel This patch adds a CDotKernel based Jensen-Shannon17:27
CIA-18shogun: kernel to shogun-toolbox. Sources of modular interface has been changed so it17:27
CIA-18shogun: can be used via the modular interfaces as well. - http://git.io/z196rQ17:27
CIA-18shogun: Viktor Gal master * r51fcf29 / src/shogun/kernel/JensenShannonKernel.cpp : Change to CMath::log2 in JensenShannonKernel - http://git.io/MMk_jQ17:27
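
The kernel merged above computes, per the standard definition, k(x, x') = sum_i x_i/2 * log2((x_i + x'_i)/x_i) + x'_i/2 * log2((x_i + x'_i)/x'_i) for non-negative histograms. A minimal C++ sketch of that per-pair value (an illustration, not shogun's actual code):

    #include <cmath>
    #include <cstddef>

    // Jensen-Shannon kernel between two non-negative histograms x and y.
    // Zero entries contribute nothing, since t*log2(s/t) -> 0 as t -> 0.
    double js_kernel(const double* x, const double* y, size_t n)
    {
        double k = 0.0;
        for (size_t i = 0; i < n; ++i)
        {
            double s = x[i] + y[i];
            if (x[i] > 0.0)
                k += 0.5 * x[i] * std::log2(s / x[i]);
            if (y[i] > 0.0)
                k += 0.5 * y[i] * std::log2(s / y[i]);
        }
        return k;
    }
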
wikingyeey my first patch applied to HEAD \o/17:35
-!- naywhaya1e is now known as naywhayare18:00
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun19:12
n4nd0hi there19:13
n4nd0can someone help me with cplex installation?19:14
n4nd0I have run into quite a few compile errors19:14
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun20:04
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 245 seconds]20:05
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 252 seconds]20:08
-!- wiking_ is now known as wiking20:08
-!- blackburn [~qdrgsm@83.234.54.44] has joined #shogun20:14
blackburnwiking: is this J-S kernel better on histograms?20:15
wikingblackburn: for my data set definitely20:15
blackburnwhat is your dataset?20:16
wikingthere's a +4% on average accuracy20:16
blackburnbetter than HIK?20:16
wikingbut then again it's quite slower as well :P20:16
wikingyep better than HIK20:16
blackburnhuh20:16
blackburn!20:16
blackburninteresting20:16
wikingbut as said i think it very much depends on your dataset as well20:17
wikingas always though :P20:17
wikingbut i have a simple bag of visual words setup20:18
wikingand if i l2-norm the tf vectors20:18
wikingjs is better than hik20:18
blackburnI've used HOG with HIK, was better than linear of course20:18
blackburnbut I didn't know anything about js kernel20:18
wikingand about the train/infer speed... it could easily be made to work with linear classification20:19
wikinghttp://www.vlfeat.org/~vedaldi/assets/pubs/vedaldi11efficient.pdf20:19
wikingbut it requires some extra coding for shogun...20:19
blackburnbtw are you a student?20:19
wikingye phd20:19
wikingso the title is: Efficient Additive Kernels via Explicit Feature Maps20:19
blackburnyeah seen that20:19
wikingpart of vlfeat project...20:20
wikingso there's already code of course20:20
wikingi mean i was already surprised about HIK performance...20:20
blackburnI'm just curious whether you are going to apply as gsoc student :)20:20
wikinghahahahaha20:20
wikingi don't know20:20
wikingdid once a GSoC project20:21
wikingwas fun20:21
wikingbut last year was weird20:21
blackburnwhich one?20:21
wikingi've applied to a gsoc project in the opencv proj... and didn't get it... and in the end the whole gsoc project just failed; it seems the guy who got it just went off...20:21
wikingso it's a shame20:22
wikingfor shogun it'd be great to add some new machines as part of gsoc project20:22
blackburnso you wasn't a gsoc student?20:22
blackburnweren't sorry :D20:22
wikinglast year not20:22
wikingi was in 200920:22
blackburnoh I see20:22
blackburnI was shogun gsoc student last year20:23
wikingwhat have you done?20:24
blackburndim reduction20:24
wikingah cool, the k(pac) part?20:24
wiking*pca20:24
wikingso i meant (k)pca20:24
wiking:P20:24
blackburnmainly LLE and similar once20:24
blackburnones20:24
blackburn:D20:24
wiking:>20:24
blackburnhttp://shogun-toolbox.org/edrt/20:25
wikingi've seen on last year's project ideas that there was an interest in doing latent s-svm20:25
wikingoh fuck20:25
wikingniice20:25
blackburnrecently I've submitted a paper on that thing20:26
wikingisn't it merged into shogun master branch?20:26
blackburnwell merged.. :)20:26
wikingon which thing?20:26
wikingyou mean on your gsoc proj?20:26
blackburnnot really, I called it toolkit20:26
blackburnand said it is integrated to shogun20:27
blackburnhttp://dl.dropbox.com/u/10139213/shogun/lisitsyn12a.pdf20:27
wikingah cool20:28
wikingi'll go through it20:28
blackburnnothing interesting :)20:28
wikingheheh20:29
wikingniiice it's a jmlr article20:29
wikingi'm working now on an ieee transactions on multimedia paper... and will try to submit it to pami20:29
wikingbut my hopes aren't so ... high :P20:30
blackburnit is my first paper _ever_ so I hope it would be accepted :)20:30
wikinghihi20:30
wikingtoday i should also get a notification for a paper of mine... wonder when it is going to happen :>20:30
wikingso yeah try the j-s stuff and let me know how's it doing for you20:31
blackburnsure20:31
wikingah yeah use l2-norm....20:31
blackburnI'm using HOG-like thing20:31
wikingmmm20:31
blackburnwithout blocks in fact :)20:32
wikingdo u have a good (meaning c) implementation of it20:32
wiking?20:32
blackburnnope20:32
blackburnstole python one from google20:32
wikinghahahaha20:32
wikinghow's that working for you?20:32
wikingfast enough?20:32
blackburnhttp://code.google.com/p/python-cvgreyc/source/browse/trunk/cvgreyc/features/HOG.py?r=8320:32
blackburnwell svm training takes 3h so I don't care20:33
blackburn:D20:33
blackburnI do road signs recognition20:33
wikingwhat's your average resolution for a picture?20:33
blackburnI resize it to 60x6020:33
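A rough sketch of the kind of block-less HOG described here (layout assumed for illustration): accumulate gradient magnitudes into per-cell orientation histograms; e.g. a 9x9 grid of cells with 9 unsigned-orientation bins yields 9*9*9 = 729 features per image.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // img: grayscale image, row-major, w*h pixels.
    std::vector<double> hog_features(const std::vector<double>& img,
                                     int w, int h,
                                     int cells = 9, int bins = 9)
    {
        std::vector<double> feat(cells * cells * bins, 0.0);
        for (int y = 1; y < h - 1; ++y)
            for (int x = 1; x < w - 1; ++x)
            {
                double gx = img[y * w + (x + 1)] - img[y * w + (x - 1)];
                double gy = img[(y + 1) * w + x] - img[(y - 1) * w + x];
                double mag = std::sqrt(gx * gx + gy * gy);
                double ang = std::atan2(gy, gx); // (-pi, pi]
                if (ang < 0.0)
                    ang += M_PI;                 // fold to [0, pi)
                int b = std::min(bins - 1, (int)(ang / M_PI * bins));
                int cx = x * cells / w;          // which cell this pixel falls in
                int cy = y * cells / h;
                feat[(cy * cells + cx) * bins + b] += mag;
            }
        return feat;
    }
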
wikingbecause i have like tons of pix (one dataset is 14 gigs, the other is 22 gigs)20:33
blackburnoh20:34
wikingso i'm already using hadoop cluster to get the features20:34
blackburndataset I use contains 39K training vectors and 12.6K test ones20:34
wikingmaybe i'll reimplement then this python thingy in opencv20:34
wikingah ok20:34
wikingniiice20:34
blackburnhmm doesn't opencv have HOG already?20:35
wikingmmm20:35
wikingthey have something specific20:35
wikingfor pedestrian detection20:35
blackburnyeah20:35
blackburnI find opencv rather unusable20:36
wikinghehehe yeah20:36
wikingi'm just using it for some basic shit20:36
wikingit's very bloated now20:36
wikingand now they have this whole transition of the api20:37
wikingand some of the bugs i've encountered... so it gave me quite a headache sometimes20:37
wikingbut anyhow some basic functions are there, so it's ok20:37
blackburnI would be disappointed if JS turned out better on my dataset :D20:38
wikinghahahaha why? :)20:38
blackburnI was near to submit a paper to some local journal here20:38
blackburnin russian20:38
wikingwell20:38
blackburnit was about HIK and HOG20:38
wikingjust change fast the thingy20:38
wikingto JS and HOG20:38
wiking:>>.20:38
blackburn:D yeah but I have formulated the multiclass thing20:38
wikingyou know: s/HIK/JS/g20:39
wikingand there you go20:39
blackburnthe one I said yesterday20:39
wiking:)20:39
blackburnwith O(log)20:39
wikingah20:39
blackburnanyway it is pretty obvious20:39
blackburnwiking: so JS is the best histogram kernel you know?20:40
wikinghehehehe well you know occam's razor... the most obvious should be the best20:40
wikingblackburn: hehehehe well this is so far giving me the best results20:40
wikingi've read now fast several papers on it20:40
blackburnI see20:40
wikingsome people of course like chi2 as well20:40
wikingbut that said you should do l1-norm on your features before using chi2 kernel20:41
wikingmakes a big difference :P20:41
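For reference, the additive chi2 kernel mentioned here is usually written k(x, y) = sum_i 2*x_i*y_i/(x_i + y_i); a quick C++ sketch, together with the l1 normalization wiking recommends applying first:

    #include <cstddef>

    // l1-normalize a histogram in place so its entries sum to 1.
    void l1_normalize(double* x, size_t n)
    {
        double s = 0.0;
        for (size_t i = 0; i < n; ++i)
            s += x[i];
        if (s > 0.0)
            for (size_t i = 0; i < n; ++i)
                x[i] /= s;
    }

    // Additive chi-squared kernel for non-negative histograms.
    double chi2_kernel(const double* x, const double* y, size_t n)
    {
        double k = 0.0;
        for (size_t i = 0; i < n; ++i)
        {
            double s = x[i] + y[i];
            if (s > 0.0)
                k += 2.0 * x[i] * y[i] / s;
        }
        return k;
    }
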
blackburnI see20:41
wikingbut yeah i've tried here chi2, hik and now js on my dataset... and js came out the best20:41
wikingwith LaRank machine20:41
blackburnI use GMNP now20:41
blackburnbut I guess your dataset is too big20:42
wikingmmm it's not that bad actually20:42
wikingon the end i'm now only having 4k features and 2k samples20:42
blackburnah20:42
wikingso it's a pretty small20:42
blackburnthen try GMNP too if you didn't20:42
wikinghave20:42
wikingLaRank gave me better accuracy20:43
blackburnreally?20:43
wikingwell at least with HIK20:43
wikinghaven't tried now with JS20:43
blackburnthat's damn strange!20:43
wikingi can give it a go...20:43
wiking:)20:43
blackburnI usually get better results with GMNP20:43
blackburnwiking: I just glanced over you blog ;)20:46
wikinghahaha20:47
blackburnare you hungarian?20:47
wikingvery un-up-to-date and very not about my research :>20:47
wikingsort of20:47
wikingborn in serbia20:47
blackburnserbia? nice20:47
wikingok lemme try gnmp20:48
wiking*gmnp that is20:48
wikingdo i have to switch to vpn20:48
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]20:48
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun20:48
wikingback20:48
blackburnhave you ever wondered what GMNP stands for? :)20:49
blackburnI bet the reason why you understand my broken english is you are non-native as well20:49
blackburn:D20:49
wikinghahahah20:49
wikingno worries your english is fine20:49
wikingok running the test... what does gmnp stand for?20:51
blackburnhah20:52
blackburnit stands for generalized minimal norm problem20:52
wikinghehe still training ;)20:53
wikingtakes longer than larank for sure20:53
blackburnyeah it is a little slower20:53
blackburndid you tune epsilon?20:53
wikingwell i set it to the same as larank 1e-520:53
blackburnin my experience epsilon=1e-2 was ok20:53
blackburnah I see20:53
wikinglet's see what it does with the same eps20:54
blackburnit would train much slower20:54
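A sketch of what this experiment might look like against shogun's C++ API; the header paths and constructor signatures below are recalled from the 2012-era tree and are assumptions, so treat this as an outline rather than verified code:

    // Assumed 2012-era shogun API -- verify names before use.
    #include <shogun/features/SimpleFeatures.h>
    #include <shogun/features/Labels.h>
    #include <shogun/kernel/JensenShannonKernel.h>
    #include <shogun/classifier/svm/GMNPSVM.h>

    using namespace shogun;

    void train_js_gmnp(CSimpleFeatures<float64_t>* feats, CLabels* labels)
    {
        CJensenShannonKernel* kernel =
            new CJensenShannonKernel(feats, feats);
        CGMNPSVM* svm = new CGMNPSVM(10.0, kernel, labels); // C=10, arbitrary
        svm->set_epsilon(1e-2); // 1e-2 per blackburn; 1e-5 trains much slower
        svm->train();
        SG_UNREF(svm);
    }
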
wikingbtw in kernels there could be some SIMD instructions20:54
wikingso like in case of JS20:54
wikingthe whole for loop could be parallelized20:55
blackburnsure but we have to do it generic..20:55
blackburnin shogun20:55
wikingyeah i know but still20:55
wikingis there use of OpenMP20:55
wiking?20:55
blackburnno, we use pthread20:55
wikingso simply pthread?20:56
wikingthat's kind of hc :>20:56
wikingi mean hardcore20:56
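As a much smaller illustration of the raw-pthread pattern than the shogun sources blackburn points to next, here is a toy kernel-matrix filler (all names invented) where each thread handles a contiguous block of rows:

    #include <pthread.h>
    #include <cstddef>
    #include <vector>

    double js_kernel(const double* x, const double* y, size_t n); // as sketched above

    struct KernelJob
    {
        const std::vector<std::vector<double> >* X; // input vectors
        std::vector<double>* K;                     // n*n row-major kernel matrix
        size_t n, row_begin, row_end;
    };

    static void* fill_rows(void* arg)
    {
        KernelJob* job = static_cast<KernelJob*>(arg);
        for (size_t i = job->row_begin; i < job->row_end; ++i)
            for (size_t j = 0; j < job->n; ++j)
                (*job->K)[i * job->n + j] = js_kernel(
                    &(*job->X)[i][0], &(*job->X)[j][0], (*job->X)[i].size());
        return NULL;
    }

    void kernel_matrix(const std::vector<std::vector<double> >& X,
                       std::vector<double>& K, int nthreads)
    {
        size_t n = X.size();
        K.assign(n * n, 0.0);
        std::vector<pthread_t> tid(nthreads);
        std::vector<KernelJob> job(nthreads);
        for (int t = 0; t < nthreads; ++t)
        {
            job[t].X = &X;
            job[t].K = &K;
            job[t].n = n;
            job[t].row_begin = n * t / nthreads;      // even row split
            job[t].row_end = n * (t + 1) / nthreads;
            pthread_create(&tid[t], NULL, fill_rows, &job[t]);
        }
        for (int t = 0; t < nthreads; ++t)
            pthread_join(tid[t], NULL);
    }
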
blackburnyeah, take a look on shogun/converter/LocallyLinearEmbedding.cpp20:56
blackburnI would say it is hardcore porn20:56
blackburnI bet the most hardcore code is shogun's arpack wrapper written by me as well20:57
blackburnshogun/mathematics/arpack.cpp20:58
blackburnit embeds superlu, blas, lapack and arpack at the same time20:58
wikinghhihihi20:58
wikingok now it's doing the inference...20:59
blackburnbtw how many classes?20:59
wiking1820:59
blackburnI see20:59
blackburnI have 4320:59
wikingmmm20:59
wikingif there's difference then it's like within 1%21:00
wikingjust got the average acc21:00
wikingand i cannot recall the digits after the decimal point :P21:00
blackburnis it worse?21:00
blackburn:D21:00
wikingso if then there is like 0.3% difference21:00
wikinglet's see with another eps21:01
wikinganyhow i should do the hog thingy21:02
wikingi'm currently using DSIFT features21:02
wikingno actually i'm not telling the truth... now it's with affine-sift :P21:02
blackburnI'm pretty lame with SIFT still21:03
blackburnwhat is the main difference between SIFT and HOG?21:04
blackburnI saw SIFT uses histograms too21:04
wikingwell21:04
wikingfirst of all sift has a key point detector part as well21:04
wikingso it's not just giving you a descriptor21:04
wikingbut key points (scale invariant) in a pic21:05
blackburnso, it finds key points21:05
blackburnand then calculates features similar to hog around these points?21:05
wikingas its name says, it should be scale invariant21:05
blackburnthe most attractive thing for me is that HOG can be formulated as integral image21:06
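The integral-image idea: precompute, per orientation bin, a cumulative table so that the histogram of any rectangular cell costs four lookups instead of a pixel scan. A sketch (data layout assumed):

    #include <cstddef>
    #include <vector>

    // votes[b] is a w*h image of gradient magnitudes voted into bin b.
    struct IntegralHOG
    {
        int w, h, bins;
        std::vector<double> ii; // (w+1)*(h+1) cumulative table per bin

        IntegralHOG(const std::vector<std::vector<double> >& votes,
                    int w_, int h_, int bins_)
            : w(w_), h(h_), bins(bins_),
              ii((size_t)(w_ + 1) * (h_ + 1) * bins_, 0.0)
        {
            for (int b = 0; b < bins; ++b)
                for (int y = 0; y < h; ++y)
                    for (int x = 0; x < w; ++x)
                        at(b, x + 1, y + 1) = votes[b][y * w + x]
                            + at(b, x, y + 1) + at(b, x + 1, y)
                            - at(b, x, y);
        }

        double& at(int b, int x, int y)
        {
            return ii[((size_t)b * (h + 1) + y) * (w + 1) + x];
        }

        // Sum of bin b over the rectangle [x0, x1) x [y0, y1): 4 lookups.
        double cell_sum(int b, int x0, int y0, int x1, int y1)
        {
            return at(b, x1, y1) - at(b, x0, y1)
                 - at(b, x1, y0) + at(b, x0, y0);
        }
    };
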
wikingbut yeah when u ask for a sift descriptor it's pretty similar to hog21:07
wikingbut of course the shit part with sift is that it's patented21:07
wikingi mean u can use it for academic purposes but then again21:07
blackburnI don't really care about patents for now21:07
wikingbut yeah i've seen the bag of visual words being patented as well21:07
wikingso it's pretty fucked up with USA  :>21:07
blackburnwell even for commercial purpose I can use HOG/SIFT/anyway-patented here :)21:08
wikingdamn i should implement my MKL thingy21:08
wikinghehehe yeah russia is cool21:08
blackburncause I live in snowy nigeria hah21:09
wikingand afaik in eu it's the same21:09
blackburndo you use mkl for images?21:09
wikingwell21:09
wikingi have different type of features21:09
blackburnI've been thinking about it21:09
blackburnahh I see21:09
wikingsome of them coming from textual21:09
wikingand the textual part is quite sparse21:10
wikingso i should actually do some laplacian kernel on the textual (sparse) features21:10
wikingand JS on the histogram features21:10
wikingi'm hoping that it'd make a difference21:10
blackburnsad JS can't be formulated with this O(log N)21:10
blackburnas HIK21:11
wikinguntil now i was just concatenating all the features and used a simple polykernel21:11
wikingmkl should do better21:11
blackburnsure21:11
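The plan sketched above amounts to a weighted sum of base kernels, k = beta1*k_laplacian(text) + beta2*k_js(histograms), where MKL would learn the betas. A toy fixed-weight version (the Laplacian kernel here uses the common exp(-||x-y||_1/width) form; js_kernel is the earlier sketch):

    #include <cmath>
    #include <cstddef>

    double js_kernel(const double* x, const double* y, size_t n); // as sketched above

    // Laplacian kernel on the (densified) text features.
    double laplacian_kernel(const double* x, const double* y, size_t n,
                            double width)
    {
        double d = 0.0;
        for (size_t i = 0; i < n; ++i)
            d += std::fabs(x[i] - y[i]);
        return std::exp(-d / width);
    }

    // Fixed-weight stand-in for what MKL would learn.
    double combined_kernel(const double* tx, const double* ty, size_t nt,
                           const double* hx, const double* hy, size_t nh,
                           double beta1, double beta2, double width)
    {
        return beta1 * laplacian_kernel(tx, ty, nt, width)
             + beta2 * js_kernel(hx, hy, nh);
    }
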
wikingeheheh21:11
wikingso21:11
wiking0.7841796875 with eps 1e-521:11
wiking0.78515625 with eps 1e-221:11
wikingso there's a difference21:12
wikingbut quite insignificant21:12
blackburnyeah looks like 3 examples or so? :)21:12
wikingyeah something like that21:12
blackburnwhat is larank's best?21:12
wikingmmm good queston21:12
wikingi know it was as well 0.78...21:12
blackburnI see21:12
blackburnI have best 97.32% accuracy on my data21:13
wikingheheheh21:13
blackburnhope to get it to 98%21:13
blackburnwith some improvements21:13
wikingwhat's your test set size?21:13
blackburn1263021:14
blackburn729 features21:15
wikinghave u tried deep learning?21:15
blackburnnope21:16
blackburnyeah it is better on that data I know21:16
blackburnbut I hope to get similar accuracy with svm21:16
wikingi should try deep learning21:18
wikinghaven't got around it yet21:18
wikingtheano seems to be a good tool for it21:18
blackburnI don't like NNs without any reason :D21:20
wikinghahahahha21:20
wikingwell they'll learn everything... just needs some time21:20
wikingand overfit :>21:21
blackburnand needs some crazy tuning..21:21
wikinghehheheheh yeps21:21
wikingbut anyhow it works as well21:21
wikingquite well21:21
blackburnsure21:21
wiking0.783203125 eps 1e-221:21
blackburnlarank?21:22
wikingy21:22
blackburnso gmnp was better!21:22
blackburn:)21:22
wikinghahahaha21:22
wikingso yeah my best result on this dataset with many more features but a simple poly kernel is 0.87921:24
wikingso i'm hoping that maybe i can do something now about this with mkl21:25
blackburnhmm strange, such a naive way was so much better?21:25
wikingbut yeah21:25
wikingwith many more features...21:25
wikingnow on the same feature set it's 0.73 with the poly kernel21:26
wikingbut yeah JS is cool :>>.21:26
wikingbut slow :)21:26
blackburnit would be unbelievably cool if using JS gave me ~98% accuracy :)21:29
wikinghahahhaha21:30
wikingwell give it a go21:30
wikingor it's hard to switch the kernel?21:30
wiking0.783203125 with eps 1e-521:30
blackburnno, not hard21:30
wikingso yeah actually GMNP is better :>21:30
blackburnone line to switch21:30
blackburn:)21:30
wikingthanks for the tip21:31
blackburnthe only problem is that training takes 8900s21:31
wikingahhahaha21:31
wikingtry then the vlfeat stuff21:32
wikingextra coding though... buuut21:32
wikingmuch faster training21:32
blackburnwhich one?21:32
wikingbecause it has JS kernel21:32
wikingbut used with this homogeneous kernel mapping21:32
blackburnhmm how can it help?21:32
wikingso it's actually running linear21:33
blackburnah21:33
wikingthe guy wrote the mapping for both the JS and HIK kernels21:33
wikingand even has matlab api if u prefer that maybe21:34
blackburnI got no time to get into that paper21:34
blackburnhow can it be possible?21:34
wikingwhat?21:34
blackburn-> linear it21:34
wikingwell as said21:35
wikingin that paper i've mentioned21:35
blackburnis it creating new features?21:35
wikingwell mapping the features as far as i understand21:35
blackburnexplicitly?21:35
wikingand then those features are simply fed to a linear svm21:35
blackburnI see21:36
wikinghttp://www.vlfeat.org/api/homkermap.html#homkermap-overview21:36
wikingaccording to the paper he not only has faster running time with this of course21:36
wikingbut sometimes the accuracy is better as well21:37
blackburnhmm21:38
blackburnsize of d(2n+1)?!21:43
blackburnwhy is it faster?21:44
blackburnwith such vast feature spaces21:44
wiking:)21:47
wikingwell check the paper for running times21:47
wikingsame or better accuracy with 1/7 training time21:49
blackburnI just can't understand how it can be faster than Maji's approximation21:53
blackburnafaik it doesn't depend on number of support vectors..21:55
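The d(2n+1) size comes from expanding each histogram entry into 2n+1 numbers such that a plain dot product of two expanded vectors approximates the additive kernel; training then reduces to a linear SVM, whose cost grows with the number of examples rather than with kernel-matrix evaluations. A sketch for the chi2 case (the map's form and the spectrum kappa(lambda) = sech(pi*lambda) are recalled from the Vedaldi-Zisserman paper and not verified here; the JS case swaps in its own spectrum function):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Approximate explicit feature map for the chi2 kernel:
    // each x_i >= 0 expands to 2n+1 values sampled with period L.
    std::vector<double> homkermap_chi2(const std::vector<double>& x,
                                       int n = 2, double L = 0.5)
    {
        std::vector<double> psi;
        psi.reserve(x.size() * (2 * n + 1));
        for (size_t i = 0; i < x.size(); ++i)
        {
            double xi = (x[i] > 0.0) ? x[i] : 1e-12; // avoid log(0)
            psi.push_back(std::sqrt(xi * L));        // kappa(0) = 1
            for (int j = 1; j <= n; ++j)
            {
                double lambda = j * L;
                double kappa = 1.0 / std::cosh(M_PI * lambda);
                double a = std::sqrt(2.0 * xi * L * kappa);
                psi.push_back(a * std::cos(lambda * std::log(xi)));
                psi.push_back(a * std::sin(lambda * std::log(xi)));
            }
        }
        return psi;
    }
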
-!- dfrx [~f-x@inet-hqmc06-o.oracle.com] has joined #shogun22:08
wikingyeees22:35
wikingone paper accepted22:35
wiking!22:35
wiking:))22:35
wikingmmm i wasn't expecting the result of this one yet... so let's see how the other paper will do...fingerzcrossed!!22:40
-!- n4nd0 [~androirc@s83-179-44-135.cust.tele2.se] has joined #shogun22:56
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun22:59
nando_blackburn, hey!23:00
-!- n4nd0 [~androirc@s83-179-44-135.cust.tele2.se] has quit [Client Quit]23:00
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has left #shogun []23:00
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun23:00
-!- nando_ [~n4nd0@s83-179-44-135.cust.tele2.se] has left #shogun []23:01
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun23:02
blackburnn4nd0: hey23:25
blackburnwiking: congrats!23:26
wikingblackburn: cheers23:26
n4nd0blackburn, I faced some trouble trying to compile shogun with support for cplex23:27
blackburnn4nd0: okay, are there any errors?23:27
n4nd0blackburn, yeah, I had to make a couple of changes in the configure file to detect my cplex version23:28
n4nd0blackburn, I don't know if I screwed something when doing that, but before it was not detecting cplex when doing configure23:28
blackburnhmm that's strange23:29
n4nd0blackburn, and well after that ... there were quite a few of compile errors23:29
blackburnlet me check whether it is being detected here23:29
n4nd0you think it is strange that I needed to change the configure?23:29
blackburnsure23:29
n4nd0I read something about that in a forum, I think the configure script is prepared to work out of the box with version 9 of cplex23:30
blackburnoh never thought it is proprietary :D23:31
blackburnokay lets check what you have changed?23:31
n4nd0all right23:32
blackburnI guess it would take more time to install/test things here than just 'use' you :)23:32
n4nd0http://snipt.org/uIfg623:34
n4nd0I commented three lines there and just did minor changes23:34
n4nd0I was also surprised with the software being proprietary :-O23:34
n4nd0and took a long while to get the trial version for students!23:35
blackburnokay sure it is hardcoded now23:35
blackburnso you had to fix it23:35
n4nd0the lines I added are the ones that are just above the commented ones23:35
blackburnso, now it is being detected right?23:36
n4nd0yes :-)23:36
n4nd0that was the good part23:36
blackburnokay then lets check errors23:36
n4nd0the bad part ... when I configure with support for cplex23:36
n4nd0make explodes :-O23:36
n4nd0so a common error I found in several files23:37
blackburnis it a nuclear explosion or just like some bomb?23:37
blackburn:D23:37
n4nd0:-P still alive around here23:38
n4nd0so this common error, like in classifier/SubGradientLPM.cpp23:38
n4nd0there were quite a few references to a class called CLinearClassifier23:39
n4nd0that was not found, it doesn't appear in the doc either23:40
blackburnhmmmmmmm23:40
blackburnwait23:40
blackburnhaha23:40
blackburnindeed23:40
n4nd0I changed those to CLinearMachine ... don't know if that was a good decision23:40
blackburnyeah it was obviously right decision23:40
blackburnlet me change it23:40
n4nd0cool23:40
n4nd0as far as I remember those were in23:41
n4nd0SubGradientLPM header and source23:41
blackburnLPBoost too23:41
n4nd0yeah23:41
n4nd0ah and another one in SubGradientLPM.h23:41
blackburnhmm23:42
n4nd0the include of qpbsvmlib.h23:42
n4nd0the file is called QPBSVMLib.h23:42
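For the record, the fixes described here look roughly like the following (illustrative lines, not the exact shogun diff):

    // classifier/SubGradientLPM.{h,cpp} and LPBoost -- the base class
    // was renamed in the Classifier -> Machine transition:
    //   class CSubGradientLPM : public CLinearClassifier  // no longer exists
    //   class CSubGradientLPM : public CLinearMachine     // compiles
    //
    // and the include must match the real (case-sensitive) file name:
    //   #include <shogun/classifier/svm/qpbsvmlib.h>   // stale name
    //   #include <shogun/classifier/svm/QPBSVMLib.h>   // actual file
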
blackburndo you know how to use github pull requests?23:42
n4nd0yeah23:42
blackburnI just thought you could do it :)23:43
n4nd0sure, cool23:43
blackburnand I would just merge it23:43
blackburnLinearClassifier vs LinearMachine is an issue related to a transition we made a year ago23:43
n4nd0ok23:43
blackburnto generalize classification/regression things23:44
blackburnwe have renamed Classifier to Machine23:44
blackburnthat was damn ugly, having Regression derived from Classification :)23:44
blackburnand the reason why we haven't detected it is obvious too :)23:45
n4nd0:)23:45
n4nd0there were also some issues with classifier/svm/CPLEXSVM*23:46
n4nd0and mathematics/Cplex*23:46
n4nd0can you reproduce the compile error?23:46
n4nd0ping23:58
blackburnhere23:59
blackburnn4nd0: not really23:59
--- Log closed Sat Feb 11 00:00:16 2012

Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!