IRC logs of #shogun for Friday, 2012-03-09

--- Log opened Fri Mar 09 00:00:19 2012
00:04 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
00:14 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
03:24 -!- vikram360 [~vikram360@117.192.186.135] has quit [Ping timeout: 252 seconds]
03:35 <@sonney2k> blackburn - lets discuss
04:40 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
06:59 -!- vikram360 [~vikram360@117.192.186.135] has joined #shogun
07:11 -!- cronor [~cronor@e178170108.adsl.alicedsl.de] has quit [Quit: cronor]
07:12 -!- cronor [~cronor@e178170108.adsl.alicedsl.de] has joined #shogun
07:16 -!- cronor [~cronor@e178170108.adsl.alicedsl.de] has quit [Ping timeout: 246 seconds]
07:46 -!- cronor [~cronor@e178170108.adsl.alicedsl.de] has joined #shogun
08:18 -!- cronor [~cronor@e178170108.adsl.alicedsl.de] has quit [Remote host closed the connection]
08:37 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
09:10 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun
09:14 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 244 seconds]
09:16 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
09:32 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun
09:38 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
09:46 -!- Guest93322 [~rohit@14.139.82.6] has joined #shogun
09:47 -!- Guest93322 [~rohit@14.139.82.6] has quit [Client Quit]
09:50 <CIA-64> shogun: Sergey Lisitsyn master * rc25cd07 / src/shogun/classifier/svm/SVMOcas.cpp : Added maxtraintime initialization for ocas svm - http://git.io/YP8vmw
09:51 -!- blackburn [~qdrgsm@109.226.105.25] has joined #shogun
10:01 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 240 seconds]
10:17 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
10:39 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
10:47 <CIA-64> shogun: Soeren Sonnenburg master * r0d72eb6 / (2 files): if max_train_time == 0 - disable time limit for ocas - http://git.io/I86ydA
10:53 <blackburn> sonne|work: mc ocas scales O(K^2) in the number of classes K :(
10:53 <sonne|work> I told you there is room for improvement in the MC area
10:53 <sonne|work> (the whole field of machine learning!)
10:54 <sonne|work> btw KNN scales better ;-)
10:54 <blackburn> sonne|work: I was expecting ocas to be better than liblinear
10:54 <blackburn> but training is 50 times slower
10:54 <blackburn> no way
10:55 <sonne|work> how did you set epsilon?
10:55 <sonne|work> 1e-2?
10:55 <blackburn> yes
10:56 <sonne|work> C?
10:56 <blackburn> sonne|work: if I got better accuracy I would keep it, but liblinear is fast and gets better accuracy
10:56 <blackburn> well, same as it was for liblinear
10:56 <sonne|work> well, results may differ then
10:56 <sonne|work> liblinear will not optimize as accurately
10:57 <blackburn> isn't it crammer-singer here as well?
10:57 <blackburn> here = in ocas
10:57 <sonne|work> could be a smaller C is better for ocas because of this
10:57 <sonne|work> yes, sure
10:57 <sonne|work> but a different method
10:57 <sonne|work> numerically different!
10:58 <sonne|work> if you do early stopping in one method, results will differ from a very precisely optimizing method
10:58 <blackburn> sonne|work: well, varying C can't slow down optimization that much?
10:58 <sonne|work> sure it can
10:58 <blackburn> in reasonable ranges like [1,100]
10:59 <sonne|work> 100 will take more time than all the others together
10:59 <blackburn> ok, I'll try C = 1e-5
10:59 <blackburn> and check train speed
10:59 <sonne|work> 1 iteration :)
10:59 <sonne|work> w=0
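
The knobs in this exchange map onto shogun's SVM machinery roughly as follows. A minimal C++ sketch, assuming the usual setter names on shogun's CSVM-derived classes (set_epsilon, set_C, set_max_train_time); the header path matches the SVMOcas.cpp commit above, but treat the exact class and signatures as assumptions, not a verbatim example:

    #include <shogun/classifier/svm/SVMOcas.h>

    void configure_solver(shogun::CSVMOcas* svm)
    {
        // A looser epsilon means earlier stopping: faster training but a
        // less precise optimum, which is one reason OCAS and liblinear
        // results differ under early stopping.
        svm->set_epsilon(1e-2);

        // Larger C weighs the loss term more heavily; C=100 typically needs
        // far more cutting-plane iterations than C=1. With C=1e-5 the
        // regularizer dominates and the solver stops almost immediately
        // (w close to 0) -- hence the "1 iteration" joke above.
        svm->set_C(1.0, 1.0);

        // Per commit r0d72eb6 above: 0 disables the wall-clock limit.
        svm->set_max_train_time(0);
    }
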
blackburnsonne|work: why did you make another if there?10:59
sonne|workblackburn: regarding structure11:00
blackburnI'll change to one, and add MaxTime>0 everywhere11:00
sonne|workwhy not have a separate shogun/multicalls folder?11:00
blackburnsonne|work: what to place here?11:00
sonne|workthe multiclass methods11:01
sonne|workall of them11:01
blackburnsvms, knn, etc11:01
blackburnin one place?11:01
sonne|workmulticlass liblinear / multiclass ocas/ knn / GMNP / ...11:01
sonne|workyes11:01
blackburnhmm works for me11:01
blackburnbut ugly cross-dependencies11:01
sonne|workand then all your new ecoc schemes11:01
sonne|workwell regression has the same issues11:01
blackburnoh ecoc, yeah I should start with it as well11:01
blackburnsonne|work: we need to extract libraries like libocas11:02
blackburnto avoid these cross-shit11:02
sonne|workIf you have an idea how this could be done say so11:02
blackburnshogun/libraries?11:02
sonne|workand?11:02
blackburnwhen no need to refer classifier in multiclass11:03
blackburncause all needed things are in shogun/libraries11:03
sonne|workit doesn't solve all problems - if we use svmlight - we also want svrlight which is derived from it11:03
blackburnno, with regression I see no solution11:04
sonne|workfor libocas/liblinear it would though11:04
blackburnbut at least for multiclass it would solve11:04
sonne|workmaybe even partially libsvm11:04
blackburnstoring libraries just near shogun wrappers is crappy for me anyway11:04
sonne|workwell they are all heavily modified - not really libraries anymore11:05
CIA-64shogun: Sergey Lisitsyn master * r7a9c865 / (2 files): Fixed max train time issue for MC OCAS - http://git.io/QoOZ5Q11:05
blackburnsonne|work: not so heavily :)11:06
sonne|workbest would be if we did not have to modify them11:06
sonne|workliblinear is pretty heavily modified / ocas too11:06
sonne|workthey all use the dotfeature concept of shogun11:06
blackburnit would be possible is developers were providing good reverse interfaces11:07
sonne|worklibsvm/ svmlight shogun's kernels etc11:07
blackburnsonne|work: ocas is not modified except some bugs we found11:07
blackburnbecause of good reverse interface11:07
blackburn&update_W things, etc11:07
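
The "reverse interface" being praised here is a callback-style API: instead of the library touching the data directly, the caller hands it function pointers, so shogun can plug in its DotFeatures without patching the library's source. A hedged sketch of the pattern; the names update_W and compute_output echo libocas, but the signatures below are illustrative assumptions, not libocas's real prototypes:

    // The solver only sees the data through caller-supplied callbacks.
    typedef double (*update_W_fn)(double alpha, void* user_data);
    typedef void (*compute_output_fn)(double* output, void* user_data);

    struct reverse_interface_solver
    {
        update_W_fn       update_W;       // move W along the search direction
        compute_output_fn compute_output; // score all examples with current W
        void*             user_data;      // e.g. a shogun CDotFeatures*
    };

    // Because every data access goes through the callbacks, shogun can back
    // them with its dot-feature code and the library itself stays unmodified
    // -- unlike liblinear/libsvm, which had to be patched.
    void solver_iteration(reverse_interface_solver* s, double* out, double a)
    {
        s->update_W(a, s->user_data);
        s->compute_output(out, s->user_data);
    }
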
11:08 <sonne|work> ahh true, I was working with vojtech on that - but IIRC he then changed to float32 internally or so?
11:08 <blackburn> sonne|work: the only difference is double/float64_t
11:08 -!- n4nd0 [~nando@n179-p53.kthopen.kth.se] has joined #shogun
11:08 <blackburn> no float or float32 there
11:08 <sonne|work> for ocas we could get things fixed and maybe even link to the (external) library
11:09 <sonne|work> anyway, back to structure
11:09 <blackburn> aha
11:09 <sonne|work> shogun/classifier/multiclass is weird
11:09 <blackburn> why?
11:09 <sonne|work> I mean we could have shogun/classifier/oneclass
11:09 <sonne|work> shogun/classifier/binary
11:09 <sonne|work> shogun/classifier/multiclass
11:09 <blackburn> and place knn everywhere :D
11:09 <sonne|work> no
11:10 <sonne|work> knn is clearly mc
11:10 <blackburn> hmm
11:10 <blackburn> n4nd0: I've got a task for you
11:10 <blackburn> :D
11:10 <sonne|work> problem is we have an svm dir
11:10 <sonne|work> classifier/svm
11:10 <blackburn> yes, the svm folder looks ugly
11:10 <sonne|work> what to do with that?
11:10 <blackburn> sonne|work: probably an -SVM suffix would be better
11:11 <sonne|work> there are 82 files in that dir
11:11 <blackburn> but renaming all the classes is pretty painful for users
11:11 <n4nd0> blackburn: tell me ;)
11:12 <blackburn> sonne|work: I would suggest doing things gradually - i.e. extract multiclass, then extract domainadaptation, etc
11:12 <sonne|work> blackburn: so we are stuck again
11:12 <sonne|work> I guess it is all due to the way one does the sorting
11:12 <sonne|work> by task type or by method name
11:13 <blackburn> n4nd0: we have a covertree integrated there (a fast data structure for neighbor searching) and KNN makes no use of it
11:13 <sonne|work> blackburn: currently we have a mixture of both
11:13 <sonne|work> e.g. kernel / distance ...
11:14 <n4nd0> blackburn: so the idea is to change KNN so it uses it? should it be sth optional to the way it is currently done?
11:14 <blackburn> n4nd0: maybe yes, some option for whether covertree should be used
11:14 <blackburn> sonne|work: damn, I got stuck as well
11:15 <n4nd0> blackburn: ok
11:15 <sonne|work> n4nd0: yeah, a separate train method utilizing covertree
11:15 <blackburn> n4nd0: check locallylinearembedding for an example of covertree usage
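
The task, then, has roughly this shape: keep the brute-force path as the default and add an opt-in covertree path. A hedged C++ sketch of the API being proposed; the stub base classes stand in for shogun's real ones, and the m_use_covertree flag and helper methods are hypothetical, not existing shogun code:

    // Stand-in stubs so the sketch is self-contained; in shogun these are
    // the real CFeatures/CLabels/CDistanceMachine classes.
    struct CFeatures {};
    struct CLabels {};
    struct CDistanceMachine { virtual ~CDistanceMachine() {} };

    class CKNN : public CDistanceMachine
    {
    public:
        CKNN() : m_use_covertree(false) {}

        // Opt-in switch, as n4nd0 suggests: brute force stays the default.
        void set_use_covertree(bool use) { m_use_covertree = use; }

        CLabels* apply()
        {
            // Cover tree queries cost roughly O(log n) once the tree is
            // built, versus the O(n) distance scan per query done today.
            return m_use_covertree ? classify_covertree()
                                   : classify_brute_force();
        }

    protected:
        bool train_machine(CFeatures* data)
        {
            // Store the training data; if the covertree option is on, build
            // the tree once here so repeated queries amortize its cost.
            return true;
        }

        CLabels* classify_covertree();   // query the prebuilt cover tree
        CLabels* classify_brute_force(); // the existing implementation

        bool m_use_covertree;
    };
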
11:16 <sonne|work> blackburn: my suggestion for now is to create shogun/multiclass and move everything in there
11:16 <n4nd0> blackburn: ok, thanks
11:16 <blackburn> sonne|work: agreed
11:16 <sonne|work> this matches shogun/regression
11:16 <n4nd0> blackburn: is this more important than QDA?
11:16 <sonne|work> YMMV :)
11:16 <blackburn> n4nd0: I don't know, it is up to you
11:17 <n4nd0> blackburn: ook
11:17 <blackburn> probably yes, scikits claimed we have a slow KNN in their paper :D
11:17 <blackburn> sonne|work: what about machines?
11:18 <n4nd0> blackburn: oh, did they say that about shogun? what paper is that?
11:19 <sonne|work> blackburn: we had that discussion a year ago or so - machines are just the base classes for everything
11:19 <sonne|work> at some point we wanted shogun/machines/classifier ...
11:19 <blackburn> n4nd0: their jmlr paper http://www.jmlr.org/papers/volume12/pedregosa11a/pedregosa11a.pdf (well, I showed my algos are faster than theirs in my paper that is under review)
11:19 <sonne|work> but multiple inheritance was required, so we couldn't do it
11:19 <blackburn> sonne|work: where to place multiclass machines?
11:19 <sonne|work> in shogun/machines/
11:20 <blackburn> hmm, ok
11:20 <sonne|work> and the real multiclass impl. in shogun/multiclass
11:20 <sonne|work> as I expect several in there
11:20 <sonne|work> I guess we won't have the next release before the end of gsoc
11:20 <sonne|work> so we have time to play around with things
11:21 <blackburn> sonne|work: huh, why don't you want to release?
11:21 <sonne|work> looks like we are doing lots of feature *additions* currently
11:21 <sonne|work> no stabilization work
11:22 <sonne|work> which would focus on the test suite etc
11:22 <blackburn> probably
11:22 <blackburn> but well, we've got no regressions from 1.1
11:22 <blackburn> and even some fixes
11:23 <sonne|work> so you propose to release just now (or soon) and then again after gsoc is over?
11:23 <blackburn> sonne|work: makes sense to me
11:24 <blackburn> 6-7-8 months looks like a pretty long release period
11:24 <blackburn> no idea how many, I always had troubles with math :D
11:26 <sonne|work> ok then - please prepare as much as you can: ChangeLog, check for regressions, etc
11:26 <sonne|work> no more feature commits before this release then
11:26 <blackburn> sonne|work: yeah, sure
11:27 <sonne|work> blackburn: btw look at publicity
11:27 <sonne|work> I added the idea there
11:27 <sonne|work> if it makes sense to you I would put it live
11:27 <sonne|work> but maybe you have some additional ideas
11:27 <blackburn> sonne|work: ok, checking
11:27 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
11:29 <blackburn> sonne|work: my custom swig object idea?
11:29 <blackburn> I mean, it could go there as well
11:29 <sonne|work> what is this?
11:29 <sonne|work> ahh yes, sure
11:29 <blackburn> sonne|work: I believe it could speed up development with shogun
11:29 <sonne|work> just make sure you know exactly what you want
11:30 <blackburn> sonne|work: do you understand what I suggest?
11:30 <sonne|work> only very few function calls can be marked 'to be overloaded'
11:30 <blackburn> sonne|work: do we need to make it in the base class?
11:30 <blackburn> mark it *
11:31 <sonne|work> I would always vote for a class that derives from the last known class
11:31 <blackburn> sonne|work: hmm, probably then we can make it optional
11:31 <sonne|work> e.g. derive from OverloadableSimpleFeatures
11:31 <sonne|work> and then change it to whatever
11:31 <blackburn> yes, exactly what I suggest
11:31 <blackburn> can we get it done?
11:32 <blackburn> sonne|work: i.e. can we get it done w/o slowing down other features?
11:33 <sonne|work> yes
11:34 <sonne|work> but what is essential there is that we have *separate* classes for that, for which we enable the so-called director feature in swig
11:34 <blackburn> sonne|work: can you see the impact on rapid development there?
11:34 <sonne|work> rapid crashes, you mean :D
11:34 <blackburn> I'm just checking whether I'm hallucinating about this idea
11:35 <sonne|work> I like the idea, so add it, but think about one toy case (e.g. implement QDA via scikits-learn or whatever)
11:35 <sonne|work> or orange
11:35 <sonne|work> or XXX
11:36 <sonne|work> ahh, better
11:36 <sonne|work> a preprocessor that changes features from the python side!
11:36 <blackburn> yes
11:36 <blackburn> did you fall in love with this idea like me?
11:36 <blackburn> :D
11:37 <blackburn> sonne|work: features, labels, machines, preprocessors could be prototyped in python and then implemented in C++
11:38 <sonne|work> we have to see how it goes
11:39 <sonne|work> but in principle yes
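
For reference, this is what the director mechanism looks like. A minimal sketch of the separate-class idea proposed above; %module(directors="1") and %feature("director") are real SWIG syntax, while OverloadableDotFeatures and the dense_dot signature shown here are assumptions for illustration:

    // In the SWIG interface file one would enable directors for the
    // dedicated class only, keeping the regular classes fast:
    //
    //   %module(directors="1") shogun
    //   %feature("director") OverloadableDotFeatures;
    //
    // C++ side: a thin subclass whose virtuals SWIG lets Python override.
    class OverloadableDotFeatures : public CDotFeatures
    {
    public:
        // A Python subclass overrides this; the C++ solvers call it
        // unchanged, so features/labels/machines/preprocessors can be
        // prototyped in Python first and ported to C++ once they work.
        virtual float64_t dense_dot(int32_t vec_idx, const float64_t* vec,
                                    int32_t vec_len);
    };

Keeping the director feature on a separate class matters because cross-language virtual dispatch has a cost; only users who opt into the overloadable class pay it.
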
11:42 <blackburn> sonne|work: will you modify the idea?
11:42 <sonne|work> no, please do it
11:42 <blackburn> ok
11:43 <sonne|work> maybe you have some other ideas / enhancements for that
11:43 <sonne|work> but do you like it in principle?
11:45 <blackburn> sonne|work: yes, good idea for me
11:45 <blackburn> not for me, but for me
11:45 <blackburn> ah, nevermind
11:45 <blackburn> :D
11:45 <sonne|work> these are all small tasks - but they have a huge impact on usability, I would say
11:47 <blackburn> ok, I'll modify it a bit later
11:48 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
11:54 -!- n4nd0_ [~nando@n179-p53.kthopen.kth.se] has joined #shogun
11:55 -!- n4nd0_ [~nando@n179-p53.kthopen.kth.se] has left #shogun []
12:33 -!- blackburn [~qdrgsm@109.226.105.25] has quit [Ping timeout: 246 seconds]
12:33 -!- cronor [~cronor@fb.ml.tu-berlin.de] has joined #shogun
12:33 <cronor> Hey!
12:37 <cronor> I have a weird problem. I do cross-validation and then train on the full dataset. This does not work (all alphas are 0). If I take the C chosen by cross-validation and train on the full dataset in a separate matlab instance, it works fine. I already use sg('clean_features', 'TRAIN|TEST'). Is there a possibility to destroy the classifier object, too? I noticed it has the same memory address. I would like to reset shogun as far as possible.
12:39 <cronor> or what does sg('clear') do?
13:00 -!- vikram360 [~vikram360@117.192.186.135] has quit [Ping timeout: 245 seconds]
13:02 -!- vikram360 [~vikram360@117.192.164.156] has joined #shogun
14:01 -!- n4nd0 [~nando@n179-p53.kthopen.kth.se] has quit [Ping timeout: 244 seconds]
14:17 -!- in3xes [~in3xes@180.149.49.227] has joined #shogun
14:22 -!- in3xes [~in3xes@180.149.49.227] has quit [Read error: Operation timed out]
14:22 <sonne|work> yeah, sg('clear') should do that, but it should work after xval too
14:37 -!- in3xes [~in3xes@180.149.49.227] has joined #shogun
14:57 -!- in3xes [~in3xes@180.149.49.227] has quit [Remote host closed the connection]
15:34 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun
15:47 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
16:23 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
16:23 -!- wiking [~wiking@huwico/staff/wiking] has quit [Client Quit]
16:24 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
17:39 -!- sonne|work [~sonnenbu@194.78.35.195] has left #shogun []
17:39 -!- sonne|work [~sonnenbu@194.78.35.195] has joined #shogun
17:42 -!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking]
18:06 -!- blackburn [~qdrgsm@109.226.105.25] has joined #shogun
18:20 -!- wiking [~wiking@huwico/staff/wiking] has joined #shogun
18:25 <blackburn> sonney2k: hmm, I managed to fully reproduce the same accuracy with liblinear and the homogeneous map
18:45 -!- sonne|work [~sonnenbu@194.78.35.195] has quit [Ping timeout: 244 seconds]
19:01 -!- sonne|work [~sonnenbu@194.78.35.195] has joined #shogun
19:17 -!- cronor [~cronor@fb.ml.tu-berlin.de] has quit [Quit: cronor]
19:20 -!- puneet [~chatzilla@115.240.22.83] has joined #shogun
19:21 -!- puneet [~chatzilla@115.240.22.83] has quit [Client Quit]
19:56 -!- jekintrivedi [~jekin@27.4.212.232] has joined #shogun
19:59 -!- jekintrivedi [~jekin@27.4.212.232] has quit [Client Quit]
19:59 -!- blackburn [~qdrgsm@109.226.105.25] has quit [Ping timeout: 246 seconds]
19:59 -!- blackburn [~qdrgsm@109.226.78.202] has joined #shogun
20:39 <CIA-64> shogun: Sergey Lisitsyn master * rc3afe20 / (18 files in 6 dirs): Some rearrangements - http://git.io/1avbYg
20:46 <@sonney2k> blackburn, what does that mean?
20:46 <blackburn> sonney2k: same 97.30 with liblinear
20:46 <blackburn> as with gmnp
20:47 <@sonney2k> with a different C - or how did you manage to get that?
20:47 <@sonney2k> btw, do you do proper train/test splits?
20:47 <blackburn> sonney2k: well, I just used the same data
20:47 <blackburn> sonney2k: train/test is already split
20:48 <@sonney2k> ok
20:48 <blackburn> probably my task was to get a concept of how to classify it
20:48 <blackburn> so I was trying to get the best accuracy
20:49 <blackburn> sonney2k: have you seen the ECOC library code?
20:50 <blackburn> sonney2k: I am afraid to abandon that part of the MC gsoc idea
20:51 <blackburn> cause it's not very much work and I probably need it
20:52 <@sonney2k> but what did you change that you got 97.3 instead of 96%?
20:53 <blackburn> sonney2k: features, I mixed up features :(
20:53 <blackburn> these were better, and I was using exactly these ones with gmnp
20:55 <blackburn> sonney2k: I have rearranged things but did not extract multiclass yet
20:56 <@sonney2k> so you should get the same accuracy with mc ocas :)
20:56 <blackburn> sonney2k: too slow
20:57 <@sonney2k> you just have a too tight stopping criterion - maybe...
20:57 <CIA-64> shogun: Sergey Lisitsyn master * rba356f2 / src/NEWS : updated NEWS - http://git.io/ibcGsA
20:57 <blackburn> probably
20:58 <blackburn> sonney2k: wow, got 97.41
20:58 <blackburn> with better normalization
20:59 <@sonney2k> ok, updated the task idea
20:59 <@sonney2k> maybe we need more links to classes
20:59 <blackburn> sonney2k: which idea?
20:59 <@sonney2k> but it should be rather good already
20:59 <@sonney2k> SGVector for example
20:59 <@sonney2k> and the file swig_typemaps.i
20:59 <blackburn> sonney2k: I am pretty enjoyed with ideas and application both
21:00 <@sonney2k> you mean happy?
21:00 <blackburn> probably
21:00 <@sonney2k> yeah, I think it is a nice mixture
21:00 <@sonney2k> have to leave - train
21:00 <@sonney2k> cu
21:13 -!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 260 seconds]
21:35 <@sonney2k> blackburn, I added a couple of references for the first idea
21:35 <blackburn> sonney2k: aha, I checked
21:53 <@sonney2k> so we are done with that - now the release, and then the next task is to weed through 100 student applications...
22:23 <blackburn> sonney2k: fyi, I got the same or worse results with pyramidal hog
22:23 <blackburn> probably it is pretty useless
22:24 <blackburn> no idea whether you're interested :)
22:32 <wiking> mmmm
22:33 <wiking> just as an fyi
22:33 <wiking> i'm starting my latent-svm branch on the weekend... had the chat with alex, so now i'll try to first make the api and then a simple solver for it
22:34 <blackburn> wiking: huh!
22:36 <blackburn> wiking: why do you hurry so damn much?
22:36 <blackburn> ;)
22:36 <wiking> have to
22:36 <wiking> got to have some results for new papers
22:37 <blackburn> ah
22:50 <blackburn> wiking: we would probably need to discuss the api
22:51 <wiking> cool
22:51 <wiking> now?
22:51 <blackburn> if you want to - why not :)
22:52 <wiking> ok
22:53 <wiking> just a sec
22:53 <blackburn> wiking: ok
23:12 <wiking> here
23:13 <blackburn> wiking: do you need some new features for l-svm btw?
23:13 <wiking> yeah
23:14 <wiking> well, it all depends on how you define the latent variable h
23:14 <wiking> so you have x as usual
23:14 <wiking> y as the labeling
23:14 <wiking> and h as the latent variable
23:14 <wiking> in case of an image
23:14 <wiking> h can be a coordinate, (x,y)
23:14 <blackburn> yeah, I know
23:14 <wiking> ah ok
23:14 <wiking> so yeah
23:14 <wiking> that was a good question - how you can read this in
23:14 <wiking> from a file
23:14 <wiking> i.e. how to store it
23:15 <blackburn> wiking: what do you plan to use as features?
23:15 <blackburn> a sliding hog window?
23:15 <blackburn> just like in some papers before?
23:15 <wiking> yeah, in my case yes
23:15 <wiking> but it can be different for anybody
23:15 <wiking> it's up to you how to define your features
23:15 <wiking> so alex had the idea
23:15 <wiking> that on the api level
23:15 <blackburn> what is the idea?
23:15 <wiking> you should supply the Psi(x, y, h)
23:16 <blackburn> a dot product?
23:16 <blackburn> that's the thing I do not know ;)
23:18 <wiking> it's the joint features
23:18 <wiking> afaik
23:18 <blackburn> ah
23:18 <blackburn> so probably we would have hog in there :)
23:19 <blackburn> just as an example
23:19 <wiking> yeah
23:19 <wiking> and position
23:19 <wiking> and labeling
23:20 <blackburn> okay, I see
23:20 <blackburn> so you are going to implement new classes
23:20 <wiking> yep
23:20 <blackburn> some LatentDotFeatures?
23:20 <blackburn> and some LatentSVM
23:20 <blackburn> any other classes?
23:20 <wiking> LatentSVM
23:21 <wiking> and then it has a train and an infer function
23:21 <wiking> question is the inheritance
23:21 <wiking> which machine should i use?
23:21 <blackburn> wiking: which solver will you use?
23:21 <blackburn> and which model will you train :)
23:22 <wiking> well
23:22 <wiking> that's another question
23:22 <wiking> i mean i would really like to use an SMO for this
23:23 <wiking> and libqp would be my usual suspect
23:23 <wiking> and i'd go first with a simple binary classification problem
23:23 <blackburn> wiking: can it be linear?
23:23 <wiking> well, currently nobody has defined a kernelized version of latent svm yet
23:23 <blackburn> only linear?
23:23 <wiking> you have to do some funky shit with the lagrangians to have it kernelized
23:23 <wiking> yep
23:23 <wiking> afaik
23:24 <blackburn> then LinearMachine
23:24 <blackburn> and MulticlassLinearMachine
23:24 <blackburn> probably you would have to separate these things
23:24 <blackburn> just as it is now
23:24 <blackburn> I mean MulticlassLatentSVM and LatentSVM
23:26 -!- cronor [~cronor@e178169201.adsl.alicedsl.de] has joined #shogun
23:26 <wiking> ok, well, then let's first go with LatentSVM : LinearMachine
23:27 <wiking> let me check what the pure virtual functions in linearmachine are
23:27 <blackburn> wiking: train_machine
23:27 <blackburn> and get_name
23:28 <blackburn> nothing more, IIRC
23:28 <wiking> oh ok
23:28 <wiking> and it either trains on the supplied features
23:28 <wiking> via the func arg
23:28 <blackburn> wiking: train() just delegates to train_machine
23:28 <wiking> or the one given via apply
23:29 <wiking> or no
23:29 <blackburn> and train(features) just sets the features and calls train()
23:29 <wiking> apply is for getting the inference
23:29 <blackburn> yes
23:29 <blackburn> it works in a train - apply way
23:29 <wiking> i mean apply is for actually doing the classification based on the trained machine
23:29 <wiking> right?
23:29 <blackburn> yes
23:29 <wiking> as it returns labels
23:29 <wiking> ok
23:29 <wiking> there we will have a problem
23:29 <blackburn> which?
23:29 <wiking> since the return value
23:29 <wiking> in case of latent
23:30 <wiking> it should not only be the labels
23:30 <wiking> i.e. y
23:30 <blackburn> not a problem
23:30 <wiking> but h as well
23:30 <blackburn> well, it would be, but easy to solve
23:30 <wiking> ok
23:30 <wiking> enlighten me
23:30 <blackburn> okay, I suggest to simply inherit from Labels
23:30 <wiking> ah ok
23:30 <wiking> and do my own LatentLabels
23:30 <blackburn> yes
23:30 <wiking> or something like that?
23:31 <wiking> ok
23:31 <wiking> i think i'll start writing the code now
23:31 <blackburn> wiking: you would probably need to modify apply as well?
23:31 <wiking> and then do a commit into my own repo
23:31 <wiking> which apply? apply() or apply(CFeatures)?
23:31 <wiking> or apply(int)
23:32 <blackburn> maybe all of them
23:32 <wiking> well, in case of apply(int) it'll be funky
23:32 <blackburn> just SG_NOTIMPLEMENTED it then
23:32 <wiking> ok
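
Putting this exchange together, the skeleton being agreed on looks roughly like the sketch below. Minimal stand-in stubs replace shogun's real base classes so it is self-contained; CLatentSVM, CLatentLabels and their members are the hypothetical classes from this discussion, not existing shogun code:

    // Stand-ins for shogun's real base classes, to keep the sketch
    // self-contained.
    struct CFeatures {};
    struct CLabels { virtual ~CLabels() {} };
    struct CLinearMachine
    {
        virtual ~CLinearMachine() {}
        virtual const char* get_name() const = 0;  // pure virtual, per above
        virtual CLabels* apply() { return 0; }
    protected:
        virtual bool train_machine(CFeatures* data) = 0;  // train() delegates here
    };

    // Labels carrying the latent variable h next to y, as suggested above;
    // the concrete representation of h is left to subclasses.
    struct CLatentLabels : public CLabels
    {
        // y[i] = class label, h[i] = inferred latent value for example i
    };

    class CLatentSVM : public CLinearMachine
    {
    public:
        virtual const char* get_name() const { return "LatentSVM"; }

        // apply() runs inference: pick the (y, h) maximizing the score and
        // return both as CLatentLabels. apply(int) would be
        // SG_NOTIMPLEMENTED, as agreed above.
        virtual CLabels* apply() { return new CLatentLabels(); }

    protected:
        virtual bool train_machine(CFeatures* data)
        {
            // alternate: infer h for fixed w, then solve the SVM for fixed h
            return true;
        }
    };
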
23:32 <blackburn> wiking: okay, so the classes are
23:33 <blackburn> hmm
23:33 <blackburn> wait
23:33 <wiking> kook listening
23:33 <wiking> *okok
23:33 <blackburn> wiking: maybe you can do it as a machine
23:33 <blackburn> not as an svm only
23:34 <blackburn> or..
23:34 <blackburn> wiking: no need to, just do the svm
23:34 <blackburn> :)
23:34 <wiking> :>
23:34 <blackburn> wiking: so, LinearLatentSVM
23:34 <blackburn> LatentLabels
23:35 <blackburn> LatentFeatures
23:35 <blackburn> LatentDotFeatures
23:35 <wiking> dotfeatures?
23:35 <blackburn> derive your features from DotFeatures
23:35 <blackburn> yeah, it is an abstract class for features that are capable of + and dot
23:35 <wiking> ah ok
23:35 <wiking> i see, from doxy now
23:36 <wiking> do i need LatentFeatures itself then?
23:36 <blackburn> I don't know :)
23:36 <blackburn> do you need it?
23:36 <wiking> imho no
23:36 <wiking> but we'll see i guess :)))
23:36 <blackburn> ah yes
23:36 <blackburn> just some simplefeatures would work, I guess?
23:37 <wiking> well, i suppose so
23:37 <wiking> i mean my only question now is
23:37 <wiking> how the fuck will the solver know
23:37 <wiking> (pardon my french) about what is what
23:37 <wiking> in the features
23:37 <wiking> meaning x, y and h
23:37 <blackburn> I speak this way all the time, so no need to worry :)
23:38 <wiking> cool
23:38 <blackburn> hmm
23:38 <blackburn> well, X is always the features
23:38 <blackburn> and y and h are in the labels
23:38 <wiking> people in other parts of the world usually think that's weird, since they are used to swearing only when they're mad :)))
23:39 <wiking> yeah
23:39 <blackburn> you should probably hear my russian when I'm in a rush and unhappy
23:39 <blackburn> :D
23:39 <wiking> but if you have, like, a feature where all of these are in one
23:39 <wiking> i mean x, y, h
23:39 <wiking> then how do you know what is what
23:39 <wiking> especially what h actually is
23:39 <wiking> how long it is
23:39 <wiking> in my case it's an (x,y) coordinate
23:39 <blackburn> wait, isn't it already known?
23:39 <wiking> but in other applications
23:40 <wiking> it can be whatever
23:40 <wiking> i mean that's the plan here
23:40 <blackburn> ahhhhh
23:40 <wiking> to somehow be able to generalize this
23:40 <blackburn> so you mean there's no way to generalize these LatentLabels?
23:40 <wiking> not to narrow down what kind of h you can have
23:41 <wiking> i'm just now digging through the history of the chat with alex
23:41 <blackburn> wiking: some vector?
23:41 <blackburn> h could be some d-dimensional vector
23:41 <blackburn> probably pretty general
23:42 <wiking> I think that the meaning of the latent variable depends on the task: 1D position for genes
23:42 <wiking> [9/03/12 2:30:47 PM] Sascha: 2D or 3D position for object detection
23:42 <wiking> [9/03/12 2:31:07 PM] Sascha: whatever for language topics
23:42 <wiking> [9/03/12 2:31:29 PM] Sascha: well, on the interface level you have to provide it at each iteration with a set of psi(x,y,h_opt)
23:42 <wiking> [9/03/12 2:31:54 PM] Sascha: that would be my idea
23:43 <wiking> I would say that the basic interface should work with the psi(x,y,h)
23:43 <wiking> [9/03/12 2:27:47 PM] Sascha: and the argmax step should be a template
23:43 <wiking> [9/03/12 2:28:15 PM] Sascha: that allows implementing the core and tackling the latent variable meaning later
23:43 <wiking> (from the logs)
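
The interface Sascha describes corresponds to the usual latent SVM training scheme (as in Felzenszwalb et al.'s latent SVM and Yu & Joachims' latent structural SVM): the user supplies psi(x,y,h) and an argmax over h, and the solver alternates between the two. A hedged, simplified C++ sketch of that loop; every name here is illustrative, none of it is shogun API:

    #include <cstddef>
    #include <vector>

    // User-supplied pieces: the joint feature map psi(x,y,h) and the argmax
    // over the latent variable under the current weights w. The meaning of
    // h (1D gene position, 2D/3D detection window, ...) stays opaque to the
    // solver, which is the whole point of the template.
    struct LatentModel
    {
        virtual ~LatentModel() {}
        virtual std::vector<double> psi(int example_idx, int y, int h) = 0;
        virtual int argmax_h(const std::vector<double>& w,
                             int example_idx, int y) = 0;
    };

    // Alternating scheme: (1) infer the best h_i per example under the
    // current w; (2) with h fixed, psi(x_i, y_i, h_i) is an ordinary
    // feature vector, so a standard linear SVM solver (e.g. one built on
    // libqp, as discussed) handles the convex step.
    void train_latent_svm(LatentModel& model, const std::vector<int>& ys,
                          std::vector<double>& w, int outer_iterations)
    {
        for (int it = 0; it < outer_iterations; ++it)
        {
            std::vector<std::vector<double> > fixed_feats;
            for (std::size_t i = 0; i < ys.size(); ++i)
            {
                int h_opt = model.argmax_h(w, (int)i, ys[i]); // inference
                fixed_feats.push_back(model.psi((int)i, ys[i], h_opt));
            }
            // solve_linear_svm(fixed_feats, ys, w);  // hypothetical convex step
        }
    }
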
23:47 <blackburn> wiking: well, just provide some api to extend
23:48 <blackburn> I mean LatentLabels <- 2DPositionLatentLabels or so is ok
23:48 <blackburn> or better, VectorLatentLabels or so
23:51 <blackburn> wiking: http://cs9735.userapi.com/u649772/12134085/z_79bcd6ca.jpg
23:51 <blackburn> result of the elections lol
23:56 <wiking> :P
23:57 <blackburn> ok, sleep time!
23:58 <blackburn> wiking: good luck with the code ;)
23:58 <blackburn> ??
23:58 <blackburn> cu
23:58 -!- blackburn [~qdrgsm@109.226.78.202] has quit [Quit: Leaving.]
--- Log closed Sat Mar 10 00:00:19 2012
