IRC logs of #shogun for Monday, 2012-04-09

--- Log opened Mon Apr 09 00:00:19 2012
PhilTilletblackburn, I rebased all my commits cause I thought one would be clearer, was it a mistake? (it seems to me that the big commit made things more unclear than anything else :p)00:14
blackburnyeah probably no need to do that00:14
blackburnhowever not so bad00:14
PhilTilletblackburn, this year, the ViennaCL GSoC Idea is on solving the Eigenvalue Problem on GPU :p So in september it should be possible to integrate Hardware Accelerated dimension reduction into Shogun.00:18
blackburnhmm nice00:18
blackburnPhilTillet: are you applying for viennacl as well?00:19
PhilTilletblackburn, no, I applied this year but to be honest was rejected (3 slots for 70 applications :D)00:19
PhilTilletbut I ended up working directly with Technical University of Vienna00:20
PhilTilletto another "Summer of Code", but organized directly by their university00:20
blackburnah I see00:20
blackburnyou mean you applied last year?00:20
PhilTillet(this is how I was able to write a paper etc..)00:21
blackburnwhy didn't you apply this year then?00:21
PhilTilletCause I wanted to do machine learning00:21
PhilTillet:p00:21
blackburnI see00:21
PhilTilletthere was not a lot of C++/Machine Learning projects :p00:21
blackburnone I guess00:22
PhilTilletyes, did not find any other organization00:23
PhilTilletwell to a smaller extent there was also OpenCV, but their main focus is not Machine Learning00:25
PhilTilletblackburn, yes, I mean I applied last year earlier (didn't notice the typo :p)00:31
PhilTilletonly applying for Shogun this year..00:31
PhilTilletI think I'm gonna listen to some Azis and then go to bed YaY00:32
blackburnlol00:33
PhilTilletlast year this was the Rebecca Black period00:33
PhilTillethttp://www.youtube.com/watch?v=kfVsfOSbJY000:35
blackburnoh that is too gay00:35
PhilTilletshe said in an interview that she was crazy about Justin Bieber :D00:36
blackburnoh that is awful song00:37
blackburn:D00:37
PhilTilletThe best is the lyrics :D00:37
blackburnher voice is $#$#@00:37
PhilTillet"Yesterday was thursday, today is Friday, tomorrow is Saturday, and Sunday, comes afterwards"00:37
blackburnwe gonna have a ball today00:38
blackburnwtf00:38
blackburnare they talking about balls?00:38
blackburn:D00:38
PhilTilletI don't  know :D I have to admit that it sounds like00:38
PhilTilletIndeed the official lyrics are "we gonna have a ball today".00:39
blackburnI am not any adult but this is way too 'teenagy' :D00:41
PhilTillet:D00:42
-!- harshit_ [~harshit@182.68.113.64] has joined #shogun00:46
harshit_blackburn: Hi !00:51
blackburnhi00:51
harshit_hey, I haven't done anything related to shogun in the last 2 days00:51
harshit_because I'm not able to fully understand what needs to be done in LBP features00:52
blackburnmay be I can suggest anything else?00:52
harshit_blackburn: Could you suggest a somewhat easier task for me to do in the meanwhile00:52
blackburnaham00:52
harshit_Do you want to see s3vm in shogun ?00:53
harshit_or co-clustering00:54
harshit_or any other semi-supervised learning algo00:54
blackburnwhy not if it is easier :)00:54
harshit_So first step would be to find a functional library..00:55
harshit_Also I think I have a little experience with classifiers, so that would be easy ..00:56
harshit_blackburn: I'll mail you some of the implementations of S3VM tomorrow, please have a look and tell me whether it would be worth my time to spend on it!01:02
harshit_For now, bye01:02
blackburnok sure01:02
blackburnbye01:02
-!- harshit_ [~harshit@182.68.113.64] has quit [Quit: Leaving]01:04
PhilTilletblackburn,  where are you from by the way?01:17
blackburnPhilTillet: Samara, just like genix01:17
PhilTilletoooh you are neighbours :p01:18
PhilTilletand what time is it there??01:18
blackburnyes01:18
blackburn03-1801:18
blackburnI shall go to bed probably :D01:18
PhilTillet:D01:18
PhilTilletI think people are always more productive during night01:19
blackburnyeah probably01:19
blackburnhowever I still need to wake up in 6 hrs01:19
blackburnand will be non-productive next morning :)01:19
PhilTilletthat's a point :D01:19
blackburnokay see you01:20
PhilTilletsee you01:20
blackburnit is night in france as well so good night :)01:20
-!- blackburn [~qdrgsm@83.234.54.186] has quit [Quit: Leaving.]01:20
-!- PhilTillet [~Philippe@vir78-1-82-232-38-145.fbx.proxad.net] has quit [Quit: Leaving]01:25
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]02:24
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun06:41
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has joined #shogun08:46
-!- V[i]ctor [~victor@host-176-100-246-254.masterbit.su] has joined #shogun09:46
n4nd0blackburn: let's see if I succeed with the cover tree mission :P10:07
blackburnn4nd0: oh that's painful10:09
blackburndamn I want to sleep10:10
n4nd0blackburn: are you tired?10:11
blackburnyeah want to sleeep10:12
n4nd0blackburn: first I am going to test if JL's implementation is actually faster using their tests and comparing to our unbeatable quick sort :)10:12
blackburnyou should note that as I said before qsort is slower in the lle10:12
n4nd0yes10:13
blackburnthat is something impossible actually - yet it seems to happen in your test10:13
n4nd0yeah ... I don't understand why it turns out to be faster in LLE10:14
blackburnqsort is something like O(N log N)10:14
n4nd0there is something fishy10:14
blackburnit is true but there are N vectors or so10:14
blackburnso it should be O(N^2 log N)10:14
n4nd0cover tree?10:15
n4nd0what do you mean it should be O(N^2 log(N))?10:15
blackburnin the LLE at least10:15
blackburnI mean if there are N vectors10:15
blackburnand we need a neighbor for each of them10:16
blackburnit would be N*O(N log N)10:16
blackburnwith qsort10:16
blackburnand should be N*O(log N) with covertree10:16
n4nd0yes10:16
n4nd0but I guess that the implementation of our tree is not that efficient10:16
n4nd0or?10:16
blackburnyes but O should stay the same10:17
blackburnif it is not log N - wtf it is not a covertree then :D10:17
blackburnhowever it can be that10:17
n4nd0that's why I want to make this test using JL's impl. first10:18
n4nd0if quick sort is still slower, we better ignore  it10:18
blackburnmakes sense10:18
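
The complexity argument above, in a nutshell: answering "who is my nearest neighbor?" for each of N points by sorting all N distances costs N * O(N log N) = O(N^2 log N) in total, while a cover tree answers each query in O(log N) after an O(N log N) build, giving O(N log N) overall. A minimal 1-D sketch of the brute-force variant being replaced (illustrative only, not Shogun code; assumes distinct points):

    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    // For each of the N points, sort all N distances: the inner sort is
    // O(N log N), so the whole loop is O(N^2 log N). A cover tree would
    // replace the sort with an O(log N) query per point.
    std::vector<int> nearest_neighbor_bruteforce(const std::vector<double>& x)
    {
        const int N = static_cast<int>(x.size());
        std::vector<int> nn(N);
        for (int i = 0; i < N; ++i)
        {
            std::vector<std::pair<double, int> > dist(N);
            for (int j = 0; j < N; ++j)
                dist[j] = std::make_pair(std::fabs(x[i] - x[j]), j);
            std::sort(dist.begin(), dist.end());
            nn[i] = dist[1].second; // dist[0] is the point itself
        }
        return nn;
    }
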
blackburnhmm I'm curious whether they will wake me up if I fall asleep lool10:19
n4nd0haha10:20
n4nd0are you in the job?10:20
blackburnyeah10:22
blackburnI'm going to leave it actually10:22
blackburnin june10:23
n4nd0you focus on GSoC then10:24
blackburnright10:24
n4nd0have to go now10:28
n4nd0bye10:28
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving]10:29
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun10:51
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]10:59
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun10:59
-!- PSmitAalto [82e9b263@gateway/web/freenode/ip.130.233.178.99] has joined #shogun11:08
blackburnwiking: have you seen lecun's paper on EBM?11:14
blackburnenergy based models11:14
wikingi've seen one energy based model11:14
wikingbut i'm not sure if it's the same11:14
wikingdo u have full title or link?11:14
blackburnyeah11:14
blackburnhttp://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf11:14
blackburnit looks like a kind of generalization of all the latent and SO stuff11:15
wikingi have this among my papers on my comp: Efficient Learning of Sparse Representations with an Energy-Based Model11:15
blackburnhttps://groups.google.com/forum/?fromgroups#!forum/google-summer-of-code-discuss11:23
blackburnwhoa how stupid it is11:23
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has quit [Ping timeout: 245 seconds]11:36
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun11:38
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has joined #shogun11:43
wikingsonney2k: here?11:59
-!- wiking_ [~wiking@78-23-191-201.access.telenet.be] has joined #shogun12:01
-!- wiking_ [~wiking@78-23-191-201.access.telenet.be] has quit [Changing host]12:01
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun12:01
-!- wiking_ [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection]12:02
-!- wiking_ [~wiking@vpnb172.ugent.be] has joined #shogun12:02
-!- wiking_ [~wiking@vpnb172.ugent.be] has quit [Changing host]12:02
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun12:02
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 245 seconds]12:04
-!- wiking_ is now known as wiking12:04
-!- PhilTillet [~Philippe@vir78-1-82-232-38-145.fbx.proxad.net] has joined #shogun12:21
PSmitAaltoHey All12:22
n4nd0PSmitAalto: hey!12:22
PSmitAaltoI am trying to get a MKLMultiClass to work, but for now I'm getting quite bad results12:22
PSmitAaltoWhat is a good strategy for setting the right epsilon, mkl_epsilon and mkl_norm12:23
n4nd0PSmitAalto: ok, I have not used myself so probably I am not the best to give suggestions but12:23
n4nd0have you tried using model selection for that?12:23
blackburnPSmitAalto: in what sense is it bad?12:23
PSmitAaltoNo I haven't tried to use model selection.12:24
PSmitAaltoThe results are worse than training an svm on a single feature12:24
blackburnI see. however I am not a mkl expert as well12:25
blackburnmy suggestion is to ask about that using mailing list12:25
blackburnIIRC alex binder reads it and could answer12:26
PSmitAaltoOk, I'll do that12:26
PSmitAaltoThanks anyway!12:26
blackburnyou may also glance over mkl paper12:27
blackburnhttp://jmlr.csail.mit.edu/papers/volume12/kloft11a/kloft11a.pdf12:28
PSmitAaltoThat is even better! I'll go through that one first12:28
PhilTillethello everybody12:29
blackburnhey12:37
wiking:>12:38
wikinglet's see if i'm gonna have the same troubles with mkl12:38
blackburnyeah. @PSmitAalto wiking is on the same path as you, trying some mkl stuff12:40
wikingPSmitAalto: i'll be having results within 20 mins12:42
blackburngenix: here?12:43
PSmitAaltoOk, nice12:43
genixblackburn, yep12:54
genixhi all12:54
blackburngenix: small task12:55
blackburngenix: do you know how does sg_progress stuff work?12:55
-!- genix [~gsomix@188.168.13.216] has quit [Ping timeout: 260 seconds]12:59
blackburnoh he is afraid of sg_progress probably *lol*12:59
n4nd0:D13:00
-!- genix [~gsomix@85.26.165.173] has joined #shogun13:00
blackburnn4nd0: once you are finished with covertree I can suggest you some framework stuff as well (some simple memory measurement system)13:00
blackburnmemory usage*13:01
genixblackburn, what up?13:01
genix*s13:01
-!- genix is now known as gsomix13:01
blackburngsomix: I was asking whether you know how SG_PROGRESS works13:01
blackburnare you?13:01
gsomixblackburn, no, I am not13:03
blackburnhttps://github.com/shogun-toolbox/shogun/blob/master/src/shogun/classifier/ConjugateIndex.cpp13:03
blackburnline 13413:03
blackburnit is used to indicate the reached progress13:03
blackburnto start with you could add similar stuff there:13:04
blackburnhttps://github.com/shogun-toolbox/shogun/blob/master/src/shogun/classifier/GaussianNaiveBayes.cpp13:04
gsomixblackburn, hm, ok.13:06
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds]13:07
gsomixblackburn, how can I check the current progress?13:07
gsomixah, stop13:08
gsomixI figured out13:08
blackburngsomix: http://pastebin.com/ifFsCG5c13:09
blackburnuse something like that13:09
gsomixok13:09
blackburnhowever that would be too fast13:09
blackburnincrease N and number of classes13:09
blackburncurrently it would be 50% and 100% only13:10
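
For context, a hedged sketch of the SG_PROGRESS pattern being discussed (modelled on the ConjugateIndex.cpp reference above; the macro is only usable inside CSGObject code and its exact arguments may differ between Shogun versions, so treat this as illustrative rather than authoritative):

    // Inside a CSGObject subclass' training method:
    for (int32_t i = 0; i < num_vectors; ++i)
    {
        // ... process training vector i ...
        SG_PROGRESS(i, 0, num_vectors); // reports progress, e.g. 50%, 100%
    }
    SG_DONE(); // closes the progress output line
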
blackburngsomix: a slightly more ambitious task would be to add progress reporting to some svm solvers13:14
-!- genix [~gsomix@188.168.14.52] has joined #shogun13:19
-!- gsomix [~gsomix@85.26.165.173] has quit [Ping timeout: 246 seconds]13:19
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun13:19
n4nd0blackburn: hey! what is that framework for memory measurement you were talking about?13:31
-!- pluskid [~chatzilla@111.120.9.128] has joined #shogun13:31
blackburnn4nd0: I think that would be nice to have some system that measures memory usage13:31
blackburni.e. you could get how much time classifier ate while training13:32
blackburnor applying13:32
blackburnerr no time13:32
blackburnmemory13:32
blackburnpluskid: want simple task?13:32
n4nd0blackburn: haha you are full of ideas today :)13:32
blackburnyes13:32
blackburn*nothing to do at job*13:32
n4nd0blackburn: :D do you have something particular in mind / somewhere you have seen something similar before?13:33
blackburnnot really13:33
blackburnok the idea is rather simple13:33
blackburnbut could be extended seriously13:33
blackburnn4nd0: simplest way is to add SG_MALLOC_N or so13:34
blackburnit could store name of block13:34
blackburnhowever I don't really like it13:34
blackburnbetter would be to add new class13:35
blackburnand each malloc should add memory usage info to it13:35
blackburnonce you have some get_memory_stats method in SGObject you can check all you need13:36
n4nd0blackburn: I think I understand the idea13:36
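
A minimal sketch of the idea n4nd0 and blackburn are batting around (all names hypothetical, not Shogun API): every tracked allocation records its size under a tag, so one can later ask how much memory a given component ate. Deallocation tracking is deliberately omitted here.

    #include <cstdlib>
    #include <map>
    #include <string>

    class MemoryStats
    {
    public:
        // Allocate and account the block under the given tag.
        void* malloc_tagged(size_t size, const std::string& tag)
        {
            m_usage[tag] += size;
            return std::malloc(size);
        }
        // Bytes allocated so far under the tag (e.g. "libsvm").
        size_t get_memory_stats(const std::string& tag) const
        {
            std::map<std::string, size_t>::const_iterator it = m_usage.find(tag);
            return it == m_usage.end() ? 0 : it->second;
        }
    private:
        std::map<std::string, size_t> m_usage; // tag -> bytes allocated
    };
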
pluskidblackburn: what's that?13:37
blackburnn4nd0: however better try to finish covertree - would be useful13:37
blackburnpluskid: https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/classifier/GaussianNaiveBayes.cpp currently it uses whole training matrix to train13:37
blackburnshouldn't be so13:37
pluskidwhat's the difference between a Gaussian NB and an ordinary NB?13:38
pluskidOK13:39
pluskidI find the doc13:39
pluskidI'll look at it13:39
n4nd0blackburn: yes, I am going to devote some work to cover tree13:39
n4nd0blackburn: but I think it will take me some effort13:40
blackburnpluskid: it assumes that features are continuous and gaussian distributed13:40
n4nd0the code is a total mess to be honest13:40
blackburnpluskid: no way to use NB with continuous variables you know13:40
blackburn(if you don't know distribution)13:40
pluskidyeah13:40
blackburnn4nd0: yeah but it is the reality - svm solvers are not much better I think13:41
n4nd0blackburn: it is good to get training in this aspect too :)13:43
n4nd0and even if I complain, this must be better than starting from scratch13:43
blackburnhah sure13:43
blackburnwriting covertree from scratch would be a headache13:44
pluskidblackburn: you mean it should not get the matrix as a whole, but use get_computed_dot_feature_vector one by one?13:51
blackburnpluskid: exactly13:51
pluskidok13:51
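
The fix pluskid is confirming: fetch one computed feature vector at a time instead of materializing the whole matrix, so peak memory stays proportional to the dimension rather than dimension times number of vectors. A hedged sketch (get_computed_dot_feature_vector is the CDotFeatures call named above; the surrounding function is illustrative only):

    #include <shogun/features/DotFeatures.h>
    using namespace shogun;

    void accumulate_stats(CDotFeatures* feats)
    {
        int32_t n = feats->get_num_vectors();
        for (int32_t i = 0; i < n; ++i)
        {
            // One vector resident at a time instead of the full matrix.
            SGVector<float64_t> vec = feats->get_computed_dot_feature_vector(i);
            // ... update per-class sums for the Gaussian NB estimates ...
        }
    }
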
n4nd0blackburn: don't you think that these cases of computing the whole matrix or re-computing features13:54
n4nd0blackburn: something similar to SPE with the distance matrix13:55
n4nd0blackburn: should be done in both ways?13:55
blackburnn4nd0: yes and again my code13:55
blackburnah13:55
blackburnthere - not really13:55
n4nd0blackburn: why?13:55
blackburnit is not really faster to compute whole matrix13:55
n4nd0blackburn: do you think so?13:55
blackburnyes - that should be the same13:55
n4nd0mmm I thought it should be faster13:56
blackburnin case of simple features and available feature matrix13:56
blackburnno difference to get feature vector or to get feature matrix13:56
blackburnjust pointer right?13:56
blackburnn4nd0: btw that gnb is really fast13:58
blackburnyou could try on that data I gave to you13:59
n4nd0blackburn: Gaussian Naive Bayes?13:59
blackburnn4nd0: yes13:59
blackburnthat thing was written by me when I was young^W applying for gsoc 2011 btw14:00
n4nd0blackburn: my point with the feature / distance matrices is that if we don't pre-compute them, some things may be computed several times14:00
n4nd0blackburn: haha are you old already :P14:01
blackburnn4nd0: well yes but it is the cost of doing large-scale applicable stuff14:01
-!- genix is now known as gsomix14:01
n4nd0blackburn: yes, but that is why I think that it could be better to offer both possibilities14:02
n4nd0in some application it can be interesting to compute the whole matrix, in some others not14:02
blackburnn4nd0: anyway in the stuff you did it is ok to use callback14:03
blackburnbecause callback can access precomputed stuff as well14:03
n4nd0e.g. for using the cover tree, pluskid measured that there were a lot of accesses to distances14:03
blackburnerr not callback - virtual function14:03
blackburncode would be messy if you would support matrices as well14:04
n4nd0we can make it transparent using a class and overloading '()'14:06
n4nd0don't you think so?14:06
n4nd0I think that override may be a better word here14:07
blackburnwhy don't you like ->distance(i,j)?14:07
n4nd0I like it14:08
n4nd0that's why I mean the code would not turn to be a mess14:08
n4nd0the difference would be that actually14:08
blackburnI mean it already supports precomputing14:08
n4nd0distance(i,j) would do inside14:08
n4nd0distance_mat[i + N*j] if the matrix is precomputed14:09
n4nd0or compute the distance between features i and j otherwise14:09
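
What n4nd0 sketches here, as code (hypothetical names, not the actual Shogun classes): callers always go through distance(i, j), and whether that is a lookup into a precomputed matrix or an on-the-fly computation is decided inside.

    #include <vector>

    struct DistanceSource
    {
        bool precomputed;             // was the full matrix computed up front?
        int N;                        // number of vectors
        std::vector<double> dist_mat; // column-major N x N, valid if precomputed

        double compute(int i, int j) const; // on-the-fly distance

        double distance(int i, int j) const
        {
            return precomputed ? dist_mat[i + N * j] : compute(i, j);
        }
    };
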
blackburnn4nd0: yes so what you want to change?14:17
n4nd0blackburn: add an option as a class member that allows doing this selection14:18
n4nd0blackburn: what do you think?14:18
blackburnn4nd0: it is manageable already with customdistance/kernel14:18
blackburnlook14:18
blackburnhttps://github.com/lisitsyn/shogun/blob/libedrt/src/shogun/converter/LocallyLinearEmbedding.cpp14:19
blackburnline 17614:19
blackburnhere I added that stuff with no option but it is rather easy to change14:20
n4nd0ok14:21
blackburnoption should be there but it should not be handled with anything but Custom*14:21
blackburnn4nd0: so no need to modify distance stuff14:21
n4nd0by CustomKernel?14:21
blackburnkernel or distance14:22
blackburnhowever there should be some caching I think14:22
blackburni.e. it would be nice to have function that caches distances to neighbors14:23
blackburnI will add one later I think14:23
-!- pluskid [~chatzilla@111.120.9.128] has quit [Quit: ChatZilla 0.9.88.2 [Firefox 11.0/20120314124128]]14:26
-!- pluskid [~chatzilla@173.254.214.60] has joined #shogun14:27
-!- PSmitAalto [82e9b263@gateway/web/freenode/ip.130.233.178.99] has quit [Quit: Page closed]14:39
n4nd0blackburn: did you try JL covertree?14:41
blackburnn4nd0: no I thought you will :D14:42
n4nd0blackburn: haha yes, I am doing it14:42
pluskidhere comes the pull request14:42
n4nd0blackburn: it was to compare opinions14:42
n4nd0blackburn: I am surprised about the time it takes to construct it14:43
n4nd0blackburn: here it seems that it is constructed in 0.178523 seconds using 37749 vectors14:43
blackburnpluskid: I will be able to merge it within 1-2hrs14:43
blackburnn4nd0: wow nice14:44
n4nd0blackburn: that is quite different from the time of the other day with the other tree14:44
blackburnnot the 423432423 seconds dncrane's took14:44
pluskidblackburn: ok, I'll back to LARS then14:44
pluskidtell me if there's any problems14:44
blackburnsure14:45
blackburnpluskid: thanks!14:46
pluskidu r wel14:46
blackburnI think I need to update news today14:46
pluskidso?14:47
blackburnto include all your (gsocers) contributions14:48
pluskidon the homepage?14:48
blackburnno, in repo14:48
pluskiddidn't see it14:49
pluskidis there a NEWS file in the repo?14:49
blackburnhttps://github.com/shogun-toolbox/shogun/blob/master/src/NEWS14:50
pluskidoh, got it!14:50
-!- PhilTillet [~Philippe@vir78-1-82-232-38-145.fbx.proxad.net] has quit [Remote host closed the connection]14:51
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has quit [Quit: Page closed]15:02
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 245 seconds]15:10
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun15:11
n4nd0shogun-buildbot: did you get tired of IRC?15:13
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 245 seconds]15:20
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun15:22
-!- PhilTillet [~android@92.90.16.70] has joined #shogun15:57
-!- PhilTillet [~android@92.90.16.70] has quit [Ping timeout: 260 seconds]16:01
@sonney2kn4nd0, hmmhh we have memory tracing already...16:06
@sonney2kcan be enabled with --enable-trace-mallocs IIRC16:06
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Read error: Operation timed out]16:08
@sonney2kwiking, whats up?16:10
wikingsonney2k: i wanted to ask some stuff about mkl16:13
@sonney2kask16:13
@sonney2k(I hate mkl btw)16:13
wikingahahhaha cool16:13
@sonney2kclose to useless crap :D16:14
wikingyour name is on the paper :)))16:14
wikingsince i now have really different kinds of features, i thought it'd be great to use it16:14
wikingyou reckon it really wouldn't make much of a difference?16:15
@sonney2kwiking, well you can even use different features without mkl16:15
@sonney2kjust use several kernels16:15
wikingwell yeah i've tried that16:15
@sonney2kwith shogun16:15
@sonney2k(CombinedKernel)16:15
wikingmmm16:15
wikingbut can i just chuck in a combined kernel16:15
wikingto any solver?16:15
wikingsince now i'm just trying to follow the example from the tutorials16:16
wikingi've been following: ../examples/documented/python_modular/mkl_multiclass_modular.py16:17
gsomixsonney2k, moin. how are you?16:29
@sonney2kwiking, any kernel machine yes16:31
@sonney2kwiking, the mkl stuff is only to learn the weights in front of the linear combination16:31
wikingmmm but i guess the only difference then is that it doesn't compute the weighting...16:31
@sonney2kyes16:32
@sonney2kand actually the weighting can make things worse - or better - depending on your problem :D16:32
@sonney2kgsomix, tired - watched demos too long tonight16:32
wikingoh great16:32
wikingthen first i just check out a simple combined kernel stuff16:32
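
A hedged sketch of what sonney2k is suggesting, in Shogun's C++ API of that era (header paths and constructor signatures may vary; hist_feats, real_feats and labels are assumed to exist): one sub-kernel per feature type appended to a CombinedKernel, fed to an ordinary kernel machine. MKL is only needed if the sub-kernel weights themselves should be learned.

    CCombinedFeatures* feats = new CCombinedFeatures();
    feats->append_feature_obj(hist_feats); // e.g. histogram-like features
    feats->append_feature_obj(real_feats); // e.g. plain dense features

    CCombinedKernel* kernel = new CCombinedKernel();
    kernel->append_kernel(new CJensenShannonKernel());
    kernel->append_kernel(new CLinearKernel());
    kernel->init(feats, feats); // pairs the i-th sub-kernel with the i-th sub-feature

    CGMNPSVM* svm = new CGMNPSVM(1.0, kernel, labels); // fixed, unlearned kernel weights
    svm->train();
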
@sonney2kmy experience is that mkl_norm=2 can help16:34
@sonney2k(a tiny bit)16:34
wikingehehhe ok will give it a go as well16:34
wikingit's just that i think a combined kernel must give me a better result16:34
@sonney2kit usually is better to work on actual features, do fine grained model selection etc16:34
@sonney2kwiking, usually yes16:35
wikingthan concatenating each feature and using one type of kernel on it16:35
@sonney2kbut you have to normalize data / kernels16:35
wikingsince the features are really different in nature...16:35
@sonney2kand that itself can be tricky :D16:35
@sonney2kand mkl won't help you with this either...16:35
wikinghehehehe16:35
wikingok let's see how it works out16:35
wikingi'm just still having fun with loading the features into shogun in a good way16:36
wikingsince they are in various formats :)16:36
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds]16:39
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun16:42
-!- V[i]ctor [~victor@host-176-100-246-254.masterbit.su] has quit [Quit: Leaving.]16:55
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun17:04
-!- PhilTillet [~Philippe@157.159.42.154] has joined #shogun17:06
PhilTilletHi hi17:06
n4nd0sonney2k: hey! how is it going?17:08
n4nd0sonney2k: so we have already memory management implemented in shogun?17:09
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has joined #shogun17:11
-!- V[i]ctor [~victor@host-176-100-246-254.masterbit.su] has joined #shogun17:19
wikingdoh i'm doing something wrong17:27
wiking[ERROR] Index out of Range: idx_a=0/0 idx_b=0/017:27
wikingoh i know what :>17:28
wikingsonney2k: still around? i just wonder for example in the case of ../examples/documented/python_modular/kernel_combined_custom_poly_modular.py17:41
wikingwhy do you create the CombinedKernel twice....17:41
wikingwouldn't it be enough to create it once...? and then just use different kernel.init calls?17:42
-!- pluskid [~chatzilla@173.254.214.60] has quit [Quit: ChatZilla 0.9.88.2 [Firefox 11.0/20120314124128]]17:46
-!- PhilTillet [~Philippe@157.159.42.154] has quit [Remote host closed the connection]18:20
wikingis there a function for merging/concatenating simple features?18:21
-!- PhilTillet [~Philippe@157.159.42.154] has joined #shogun18:43
@sonney2kn4nd0, well one can list blocks of memory that are allocated19:12
@sonney2kn4nd0, so everything that SG_MALLOC etc allocates can be traced when ./configure --enable-trace-mallocs is on19:12
@sonney2kwiking, use CCombinedDotFeatures19:12
@sonney2kthen you can mix dense, sparse, xxx features19:12
@sonney2kand use any e.g. linear classifier with it19:13
@sonney2kor if you want to mix any kind of feature types, CCombinedFeatures19:13
@sonney2kwiking, yes one kernel is sufficient19:13
PhilTilletsonney2k, I have a question :D Why isn't the Gaussian Kernel implemented as a CDistanceKernel?19:14
wikingsonney2k: heheh yeah i've tried with one kernel and worked... i'm trying now combined kernel but i want some of my features to be handled by one type of kernel, that's why i want to concatenate some features19:14
wikingah i see append_feature_obj should work for CCombinedDotFeatures19:15
-!- harshit_ [~harshit@182.64.221.94] has joined #shogun19:30
wikingmmm i've found NormOne preprocessor but is there NormTwo, or NormN for that matter ? as i cannot find it :(19:30
harshit_n4nd0: hola ! , wassup .!19:32
gsomixhttp://piccy.info/view3/2869209/689d46487f8b6162bd1c6c1534b630f9/ Samara city... =___=19:33
n4nd0harshit_: hey! how is it going?19:38
wikingn4nd0: yo19:40
V[i]ctorhi!19:43
@sonney2kwiking, no19:44
wikingsonney2k: mmm i guess then it's time to have one :)19:45
@sonney2kheh19:45
wikingcan it go as preprocessor ? just like in case of NormOne ?19:45
@sonney2kwiking, look at what normone does19:45
wikingyep i've checked19:45
@sonney2knot sure what you want to do...19:46
wikingwell what if i want norm-219:46
n4nd0wiking: hey19:46
wikingso the euclidean norm19:46
wikingor any p-norm for that matter... where p>=119:48
wikingthat's why i thought having a NormP preprocessor would be good... which by default calculates the 2-normalized vectors of a feature matrix...19:49
harshit_n4nd0: good here :)19:52
harshit_n4nd0: Do you have any experience with semi supervised algos ?19:52
n4nd0harshit_: I have studied some EM stuff but I think that fits better in unsupervised19:53
n4nd0harshit_: at least that's the way I remember I studied it19:54
harshit_yeah EM is mostly unsupervised, have a look at the mail I sent19:54
harshit_If you have any idea about those implementations, do tell me19:55
CIA-64shogun: pluskid master * r1179258 / (3 files in 2 dirs): GNB: replace SGVector with SGMatrix to make the code easier to read. - http://git.io/hoqSCA19:59
CIA-64shogun: pluskid master * rf87cd67 / src/shogun/classifier/GaussianNaiveBayes.cpp : Fetch one feature vector at a time instead of the whole feature matrix for potential large-scale training. - http://git.io/RZ68Fw19:59
CIA-64shogun: pluskid master * rcaba5ca / examples/undocumented/python_modular/classifier_gaussiannaivebayes_modular.py : Remove debug code. - http://git.io/DxYHBg19:59
-!- blackburn [~qdrgsm@83.234.54.186] has joined #shogun20:00
blackburnmighty blackburn here lol20:03
PhilTillethi :)20:10
blackburnhi20:10
PhilTilletdid you sleep well?20:11
blackburnoh not really20:11
blackburn:D20:11
PhilTillet=D20:11
-!- harshit_ [~harshit@182.64.221.94] has quit [Quit: Leaving]20:12
gsomixblackburn, yo.20:17
blackburnhi-hi20:20
blackburntwo days left for announcement of slots probably20:21
PhilTilletyes..20:22
@sonney2kwiking, NormOne actually ensures ||x||_2 = 120:25
blackburnSumOne is for L120:26
wikingsonney2k: yeah i've realized, and i wondered why is it called NormOne when it's norm-2 :P20:26
blackburnhowever yes I agree it should be generalized20:26
wikingok i'll make a fast one now20:26
wikingas i need it anyways20:26
@sonney2kwell the norm of each vector is one :)20:26
blackburnwiking: kind of norm->one20:26
@sonney2kafter processing .20:26
blackburnwiking: could you please also specialize that for p=2 with cblas_dnrm2?20:31
wiking?20:31
wikingexplain more :)20:32
@sonney2kspeedup the code!20:32
blackburnI know it is more stable in terms of underflow and overflow20:32
wikingahhahahaha20:32
blackburnthan dot of two vectors20:32
blackburnwiking: yeah seriously20:33
wikingdoin' it20:33
blackburndnrm2 is better than ddot20:33
wikingjust a sec20:33
blackburn[1] http://fseoane.net/blog/2011/computing-the-vector-norm/20:34
blackburn:D20:34
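
A minimal sketch of the p-norm normalization under discussion (hypothetical function, not wiking's actual commit): scale a vector to unit p-norm, specializing p = 2 to cblas_dnrm2, which is more robust to underflow/overflow than summing squares by hand.

    #include <cblas.h>
    #include <cmath>

    void normalize_pnorm(double* v, int n, double p)
    {
        double norm;
        if (p == 2.0)
            norm = cblas_dnrm2(n, v, 1); // numerically stable Euclidean norm
        else
        {
            norm = 0.0;
            for (int i = 0; i < n; ++i)
                norm += std::pow(std::fabs(v[i]), p);
            norm = std::pow(norm, 1.0 / p);
        }
        if (norm > 0.0)
            cblas_dscal(n, 1.0 / norm, v, 1); // v <- v / ||v||_p
    }
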
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has quit [Quit: ( www.nnscript.com :: NoNameScript 4.22 :: www.esnation.com )]20:42
@sonney2kblackburn, I think we should then update CMath::twonorm too :)21:02
blackburnsonney2k: we have CMath::twonorm?21:02
blackburnlol21:02
@sonney2khehe21:02
wikingok21:05
wikingready21:05
wikingoops almost... the 1.0 norm is missing... :)21:05
n4nd0blackburn: hey! so it looks like sonney2k pointed out a solution that already exists for the memory allocation framework you suggested21:10
n4nd0blackburn: do you think there is anything else that you were thinking of but not covered there?21:10
blackburnn4nd0: yes, probably only a small patch would be needed21:10
@sonney2kblackburn, what is missing?21:10
blackburnyes, we can also store relation to object21:10
blackburnI mean now we can't check how memory libsvm ate21:11
@sonney2konly in total yes / no annotations or so21:11
@sonney2kblackburn, http://sonnenburgs.de/soeren/media/images/gsoc2012-proposals-stats.png21:12
wikingsonney2k blackburn here's the implementation of p-norm: https://github.com/vigsterkr/shogun/commit/01c13468de45eec7616ecbbdb7ad1823fd02964621:12
blackburnoh how many links ;)21:12
wikingif you have any comments let me know otherwise i do the things for modular interface and i'll submit a pull request21:12
blackburnsonney2k: whoa a few just before deadline21:13
blackburnwiking: please use_that_naming21:14
wikingah shit ok forgot :))21:14
n4nd0sonney2k: nice graph :)21:15
wikingblackburn: ok pushed...21:16
blackburnokay I have to go for a while21:16
blackburnwiking: Soeren will merge I think ;)21:17
wikingblackburn: is the lapack handling ok?21:17
wikingi suppose so just want to double check21:17
blackburnyes21:17
wikingok21:17
wikingthen i'll do the modular thingy and then i'll do a commit + pull request21:17
wikingand then it's up to you guys if you want to get rid of NormOne and SumOne :P21:18
wikingpull request sent...21:23
shogun-buildbotbuild #462 of python_modular is complete: Failure [failed test_1]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/python_modular/builds/462  blamelist: pluskid@gmail.com21:25
@sonney2kOK I blogged about these stats: http://sonnenburgs.de/soeren/category/blog/#shogun-student-applications-statistics-google-summ21:43
@sonney2kwiking, we cannot get rid of these things completely (for compatibility)21:44
wikingokey no worries21:44
wikingwell what can be done is to simply inherit from this class in NormOne and SumOne21:46
gsomixgood night, guys21:46
PhilTilletgood night gsomix21:46
wikingmmmm mkl is quite slow :(21:46
@sonney2kwiking, as I said don't do it :)21:48
@sonney2kwiking, what are you doing btw?21:48
@sonney2kgsomix, have a good sleep21:48
wikingsonney2k: well doing a combined kernel learning21:48
wikingmmm ok i'll do the changes soon21:49
@sonney2kwiking, with mkl or just with svm?21:50
wikingwell both21:50
wikingi'm trying with gnmpsvm21:50
@sonney2k?21:50
@sonney2kso just svm21:51
wikingit works nice21:51
wikingbut i thought i give it a go with mkl21:51
@sonney2khow big is the kernel cache?21:51
@sonney2khow big is the data21:51
wikingdefault21:51
shogun-buildbotbuild #456 of ruby_modular is complete: Failure [failed test_1]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/ruby_modular/builds/456  blamelist: pluskid@gmail.com21:51
@sonney2kwhich kernels?21:51
wikingJensenShannon + LINEAR21:51
@sonney2kwiking, did you use normone preproc ?21:51
wikingthe data... mmm it's about 1500 features21:51
@sonney2kerr kernel normalizer21:51
wikingsonney2k: haven't tried kernel normalizer21:52
@sonney2kwiking, it is sth you need ... I guess sqrtdiagkernelnormalizer is what you want :)21:52
@sonney2kwiking, 1500 examples or dims?21:52
wikingdims21:53
wikingok so what kernel normalizer should i use21:54
@sonney2kand #examples?21:54
@sonney2ksqrtdiagkernelnormalizer21:54
wikingand only on the the combinedkernel?21:54
wikingor on each subkernels?21:54
@sonney2kwiking, no for each submkernel21:54
wikingok21:54
wikinglet's see how that helps21:54
wikingwhy exactly sqrtdiagkernelnormalizer ?21:55
@sonney2ktrace(K)=121:55
wikingok adding them :)))21:56
wikingtrace(K) ?21:57
wikingthe number of examples is 1k21:58
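
For reference, the sqrt-diag normalization being recommended computes k'(x, y) = k(x, y) / sqrt(k(x, x) * k(y, y)), so every normalized self-similarity k'(x, x) becomes 1 and sub-kernels of very different scales become comparable before they are summed in a CombinedKernel. A one-line sketch of the per-entry rule (Shogun's CSqrtDiagKernelNormalizer implements this idea):

    #include <cmath>

    // Divide by the geometric mean of the two self-similarities.
    double normalize_entry(double k_xy, double k_xx, double k_yy)
    {
        return k_xy / std::sqrt(k_xx * k_yy);
    }
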
blackburn??22:01
blackburnre22:01
blackburn:D22:01
wikingyo22:01
@sonney2kwiking, but then it is fast right? I mean worst case precompute the matrix and done22:02
wikingdooooh22:06
wikingmkl finished.... worse results than with a simple svm22:07
wikingsignificantly :(22:07
blackburnI think it is totally wrong idea :D22:07
wikingahhahahahaha22:07
blackburnI mean mkl (though I am not an expert right)22:07
wikingok i'm trying now the normalization of the sub kernels :)22:09
@sonney2kwiking, ahh and don't forget to adjust C22:10
@sonney2kanyways sleep time for me22:10
@sonney2kcu22:10
wikingsonney2k: where in mkl or svm? or both? :)22:10
@sonney2kmkl C use 0 always22:11
@sonney2konly svm22:11
wikingah ok maybe that was the problem :)))22:11
wikingi've used 1.022:11
wikingsonney2k gnite and thnx for the tips22:15
PhilTilletgnight sonney2k22:16
n4nd0sonney2k: good night22:19
blackburnoh I have to say this22:20
blackburnsonney2k: good night!!22:20
PhilTillet:p22:20
PhilTilletblackburn, do I still have some fixes to do in my nearest centroid pull request? :s22:21
blackburnah22:21
PhilTilletmaybe adding shrinking?22:21
blackburnwhat kind of shrinking? :)22:22
PhilTilletI have seen on google that there was a variant called nearest shrunk centroids22:22
PhilTilletit shrinks centroids towards zero22:22
PhilTilletshrunken* :p22:23
blackburnoh I don't know22:24
blackburnjust fix things I noted22:24
PhilTilletwell I did it22:26
PhilTilletdidn't you see the new commit? :p22:26
blackburnhttps://github.com/shogun-toolbox/shogun/pull/43322:27
blackburncan you?22:27
PhilTilletoh, seems like i made a mistake in my commit name22:27
PhilTilleti can see 69cca9222:27
PhilTillet* Fixed the example include and comments22:27
PhilTilletbut there are other stars in that commit with other things i fixed22:28
PhilTillet:p22:28
blackburnhmm right strange22:28
blackburnwhy comments are after that22:28
PhilTilletI don't know :o22:29
blackburnokay more commits then22:29
PhilTilletyes, actually i had many different commits, but rebased them into one22:29
PhilTilletprobably bad idea22:29
PhilTilletthat's probably why it appears before too22:31
PhilTilletwon't do that again :D22:31
blackburnPhilTillet: okay commented some stuff22:34
PhilTilletsaw that, I don't understand the free thing, target is just a pointer, not allocated22:35
blackburnactually you should rather use DotFeatures here I think22:37
PhilTilletah, you are talking about float64_t* current ?22:38
blackburnno22:38
blackburnokay actually it is ok for simple features22:38
blackburnhowever there is a do_free stuff you know22:38
blackburnit is supposed for such things22:38
blackburnworkflow is to get and call free with do_free22:39
blackburnit sets do_free automagically and frees according to its value22:39
PhilTilletokay got it22:46
PhilTilletand concerning the other comment, 1.0/(float64_t)num_per_class[i], if I got it right I should replace with 1.0/((float64_t)num_per_class[i]-1) right?22:49
blackburnyeah22:49
blackburnbut consider case with N=122:50
PhilTilletyes22:50
PhilTilletthat's what i was going to point out22:50
PhilTillet:D22:50
PhilTilletunderstood22:50
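
The fix being agreed on is the unbiased (Bessel-corrected) variance estimate: divide by N - 1 instead of N, guarding the N = 1 case where N - 1 would be zero. A tiny sketch (hypothetical helper, not the pull-request code):

    // Scale factor for the per-class variance; a lone sample carries no
    // spread information, so return 0 rather than divide by zero.
    double variance_scale(int num_per_class)
    {
        return num_per_class > 1 ? 1.0 / (num_per_class - 1) : 0.0;
    }
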
PhilTilletI just check it still executes fine and I push22:55
blackburnok22:55
-!- harshit_ [~harshit@182.64.221.94] has joined #shogun23:03
shogun-buildbotbuild #463 of python_modular is complete: Success [build successful]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/python_modular/builds/46323:04
harshit_blackburn: hi,Got your mail.! what do you mean by " What you would need is to make use of its semi-supervised capabilities there."23:04
harshit_does that mean that I need to create an example of it ?23:05
blackburnharshit_: I mean we currently do not provide abilities to train semi-supervised there23:05
harshit_ohk23:05
blackburnso you would need to create some class for that23:06
harshit_I need to go through SVMlin implementation in shogun, Before saying anything else !23:06
harshit_blackburn: A wrapper kind of class ?23:07
blackburnyes23:07
harshit_got it23:07
harshit_thanks,..23:08
-!- genix [~gsomix@188.168.4.3] has joined #shogun23:09
blackburnn4nd0: ah btw! are you here?23:09
n4nd0blackburn: yeah, tell me23:09
-!- gsomix [~gsomix@188.168.14.52] has quit [Ping timeout: 260 seconds]23:09
blackburnn4nd0: heiko added subset of subset of subset of subset support23:09
n4nd0haha23:09
blackburnso kernel multiclass stuff should work now23:09
n4nd0cool23:09
n4nd0I should try it23:10
shogun-buildbotbuild #457 of ruby_modular is complete: Success [build successful]  Build details are at http://www.shogun-toolbox.org/buildbot/builders/ruby_modular/builds/45723:10
n4nd0blackburn: what are you working on lately?23:11
blackburnI'd say nothing :D23:12
blackburnuniversity kills my time this week23:12
PhilTilletblackburn, I updated my pull request :)23:12
n4nd0university stuff then23:12
blackburnn4nd0: sad true23:13
blackburntruth I mean lol23:13
CIA-64shogun: Sergey Lisitsyn master * r7b3b7bc / src/NEWS : Updated NEWS - http://git.io/nGKAMQ23:14
n4nd0blackburn: got you :D23:14
blackburnn4nd0: damn heiko did not add add_subset thing to labels23:18
n4nd0blackburn: :O23:19
blackburnonly to features23:19
blackburnnot damn heiko :D23:19
blackburndamn. heiko23:19
n4nd0lol23:20
blackburnah I'll finish that later23:22
n4nd0ok23:22
blackburndamn it is getting warm here23:22
blackburnwas -5 a week before and +1523:22
blackburnalready23:22
PhilTillethehe23:24
PhilTilletRussia is not the warmest country ever :p23:24
n4nd0oh +1523:26
n4nd0we don't have that temperature here yet23:26
PhilTilletSpain is very warm in summer though :p23:27
PhilTilletI used to go near Barcelona23:27
blackburnalmost no snow there left23:27
blackburnPhilTillet: I guess he is in Sweden ;)23:28
PhilTilletooooh23:28
blackburnokay see you tomorrow guys23:28
PhilTilletsee you tomorrow :)23:28
-!- blackburn [~qdrgsm@83.234.54.186] has quit [Ping timeout: 248 seconds]23:33
n4nd0good night23:33
-!- PhilTillet [~Philippe@157.159.42.154] has quit [Quit: Leaving]23:52
--- Log closed Tue Apr 10 00:00:19 2012
