--- Log opened Sun Aug 26 00:00:17 2012
CIA-52 | shogun: Sergey Lisitsyn master * r9429c0e / src/shogun/evaluation/CrossValidationMKLStorage.cpp : Added handling of multiclass MKL machine in MKL storage - http://git.io/xykkuA | 00:07 |
yoo | blackburn: allright, going to test it | 00:10 |
CIA-52 | shogun: Sergey Lisitsyn master * rb35dbe8 / src/shogun/evaluation/CrossValidationPrintOutput.cpp : Added MKL multiclass handling in CV print output - http://git.io/3dGCzw | 00:11 |
blackburn | okay | 00:11 |
yoo | blackburn: could you explain me again why not using the Evaluation interfaces such as MulticlassAccuracy and MulticlassOVR ? | 00:13 |
blackburn | yoo: what's the difference? | 00:13 |
yoo | blackburn: yes thats it: it is exactly the same right ? | 00:14 |
blackburn | yeah, but it already has to do the same as MulticlassOVR | 00:14 |
blackburn | so I just put binary evaluations there | 00:14 |
blackburn | argh so we need a matrix to store accuracies too? | 00:15 |
yoo | ye | 00:15 |
yoo | isn't it already here? SGVector<float64_t> m_evaluations_results; | 00:16 |
blackburn | no it is for binary | 00:16 |
blackburn | F1/etc | 00:17 |
yoo | ah ok then we need multiclass acc (and confusion matrices) | 00:17 |
blackburn | oh gosh | 00:17 |
blackburn | confusion matrices? | 00:17 |
blackburn | :D | 00:17 |
yoo | thats why I ask | 00:17 |
blackburn | yeahhhh | 00:17 |
yoo | MulticlassOVR and multiclass Acc | 00:17 |
yoo | let you choose what you want | 00:17 |
yoo | and not compute everything | 00:18 |
yoo | I liked the way modelselection_output worked | 00:18 |
yoo | msout.add(mc_acc) or msout.add(mc_ovr(roc)) | 00:18 |
blackburn | confusion matrices have to be handled in special way still | 00:19 |
yoo | ie ? | 00:19 |
blackburn | you wanted to store it, right? | 00:19 |
yoo | yes | 00:19 |
blackburn | okay I will add it as option | 00:19 |
yoo | then we will have all the mc evaluation possibly stored | 00:20 |
blackburn | yes | 00:20 |
-!- gsomix [~gsomix@95.67.188.74] has quit [Ping timeout: 246 seconds] | 00:23 | |
blackburn | okay I will add three options | 00:23 |
blackburn | to constructor | 00:23 |
blackburn | ROC, PRC and confusion matrices | 00:24 |
blackburn | I am a little tired with that code so I will make it constant :) | 00:24 |
yoo | allright | 00:24 |
yoo | =) | 00:24 |
blackburn | I mean not possible to change after calling constructor | 00:24 |
blackburn | it requires special handling | 00:25 |
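The multiclass evaluations discussed above (accuracy plus a per-class confusion matrix) can be sketched in plain Python. This is an illustrative aside only, not the shogun MulticlassAccuracy or CV storage API:

```python
# Illustrative sketch of the quantities discussed above (plain Python,
# not shogun's MulticlassAccuracy / CrossValidation storage classes).

def confusion_matrix(y_true, y_pred, num_classes):
    """counts[i][j] = how often true class i was predicted as class j."""
    counts = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        counts[t][p] += 1
    return counts

def multiclass_accuracy(y_true, y_pred):
    """Fraction of correctly predicted labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)
acc = multiclass_accuracy(y_true, y_pred)
# note: accuracy equals the trace of the confusion matrix over the total count
```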
yoo | I'm beginning to read the shogun code easily, so I'll be able to help with dirty tasks if you have some to give | 00:25 |
blackburn | oh I will think about some task to get into development | 00:26 |
yoo | but you can call several options in the constructor right ? | 00:26 |
yoo | and you check with dirty dynamic_cast ? | 00:26 |
blackburn | check what? | 00:26 |
yoo | ROC or PRC: the BinaryEval type | 00:26 |
yoo | the Multiclass Eval type I mean :p | 00:27 |
blackburn | where? | 00:27 |
yoo | nevermind | 00:27 |
blackburn | ah in dynamic object array? | 00:27 |
yoo | yep | 00:27 |
blackburn | well I assume nobody puts anything wrong into that | 00:28 |
blackburn | kind of ugly code that is | 00:30 |
yoo | [100%] Built target shogun | 00:34 |
yoo | =) | 00:34 |
yoo | well I have to work with cpack for installation, didn't manage the interfaces yet .. but this will probably work in the future | 00:34 |
CIA-52 | shogun: Evgeniy Andreev master * rab15b5f / examples/undocumented/python_modular/features_director_dot_modular.py : added example for DirectorDotFeatures - http://git.io/7IXhQA | 00:54 |
CIA-52 | shogun: Evgeniy Andreev master * rab6a200 / (7 files in 3 dirs): protocols for CustomKernel - http://git.io/4Gjzow | 00:54 |
CIA-52 | shogun: Sergey Lisitsyn master * r65ad845 / (8 files in 4 dirs): Merge pull request #757 from gsomix/buffer_protocol - http://git.io/fJX7Eg | 00:54 |
CIA-52 | shogun: Sergey Lisitsyn master * r27c7dd2 / (2 files): Added confidence matrix computation abilities to CV MC storage and - http://git.io/SZotSg | 00:59 |
-!- blackburn [~blackburn@109.226.79.69] has quit [Quit: Leaving.] | 01:16 | |
-!- yoo [~eric@bdv75-2-87-91-8-203.dsl.sta.abo.bbox.fr] has quit [Quit: Ex-Chat] | 02:12 | |
shogun-buildbot_ | build #468 of deb3 - modular_interfaces is complete: Failure [failed test python_modular] Build details are at http://www.shogun-toolbox.org/buildbot/builders/deb3%20-%20modular_interfaces/builds/468 blamelist: Evgeniy Andreev <gsomix@gmail.com>, Sergey Lisitsyn <lisitsyn.s.o@gmail.com> | 02:42 |
shogun-buildbot_ | build #77 of nightly_default is complete: Failure [failed test] Build details are at http://www.shogun-toolbox.org/buildbot/builders/nightly_default/builds/77 | 03:49 |
-!- emrecelikten1 [~emre@trir-4d0d9665.pool.mediaWays.net] has quit [Ping timeout: 244 seconds] | 04:24 | |
-!- emrecelikten [~emre@trir-4d0d9a1f.pool.mediaWays.net] has joined #shogun | 04:37 | |
-!- in3xes [~in3xes@122.174.73.211] has quit [Ping timeout: 245 seconds] | 05:27 | |
-!- in3xes [~in3xes@122.174.73.211] has joined #shogun | 06:18 | |
-!- in3xes [~in3xes@122.174.73.211] has quit [Ping timeout: 264 seconds] | 06:24 | |
CIA-52 | shogun: Soeren Sonnenburg master * rf6aaf00 / examples/undocumented/python_modular/features_dense_protocols_modular.py : return none when numpy version etc is not sufficiently new - http://git.io/gb7F9A | 07:07 |
-!- sr___ [u5548@gateway/web/irccloud.com/x-ewmhdcvbknaunxsl] has quit [Quit: Connection closed for inactivity] | 07:25 | |
shogun-buildbot_ | build #469 of deb3 - modular_interfaces is complete: Failure [failed test python_modular] Build details are at http://www.shogun-toolbox.org/buildbot/builders/deb3%20-%20modular_interfaces/builds/469 blamelist: Soeren Sonnenburg <sonne@debian.org> | 07:36 |
-!- gsomix [~gsomix@178.45.66.191] has joined #shogun | 08:06 | |
-!- gsomix [~gsomix@178.45.66.191] has quit [Ping timeout: 268 seconds] | 08:18 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 08:29 | |
n4nd0 | wiking: around? | 08:30 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 10:10 | |
-!- blackburn [~blackburn@109.226.79.69] has joined #shogun | 10:11 | |
CIA-52 | shogun: Sergey Lisitsyn master * r84fe14c / tests/regression/r_static/preprocessor.R : Fixed naming in R regression preprocessor test - http://git.io/cBlT3Q | 10:42 |
CIA-52 | shogun: Sergey Lisitsyn master * r4a9f485 / (3 files in 2 dirs): Fixed memory handlnig in CV MC storage - http://git.io/YlsfXA | 10:44 |
CIA-52 | shogun: Sergey Lisitsyn master * rd1d57a1 / src/shogun/lib/slep/slep_mc_tree_lr.cpp : Removed scaling in multiclass tree guided LR - http://git.io/wXS9Kg | 10:44 |
CIA-52 | shogun: Sergey Lisitsyn master * rad6f6dc / (3 files): Marked string kernels failing on regression tests as unstable - http://git.io/ee6QDA | 10:58 |
shogun-buildbot_ | build #470 of deb3 - modular_interfaces is complete: Failure [failed test python_modular] Build details are at http://www.shogun-toolbox.org/buildbot/builders/deb3%20-%20modular_interfaces/builds/470 blamelist: Sergey Lisitsyn <lisitsyn.s.o@gmail.com> | 11:12 |
-!- in3xes [~in3xes@122.174.73.211] has joined #shogun | 11:33 | |
shogun-buildbot_ | build #471 of deb3 - modular_interfaces is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/deb3%20-%20modular_interfaces/builds/471 | 12:00 |
-!- blackburn [~blackburn@109.226.79.69] has left #shogun [] | 12:26 | |
-!- yoo [575b08cb@gateway/web/freenode/ip.87.91.8.203] has joined #shogun | 12:29 | |
yoo | hi | 12:29 |
-!- yoo [575b08cb@gateway/web/freenode/ip.87.91.8.203] has quit [Quit: Page closed] | 12:54 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 13:06 | |
n4nd0 | someone out there :)? | 13:08 |
-!- yoo [575b08cb@gateway/web/freenode/ip.87.91.8.203] has joined #shogun | 13:09 | |
-!- yoo [575b08cb@gateway/web/freenode/ip.87.91.8.203] has quit [Client Quit] | 13:09 | |
emrecelikten | Me | 13:10 |
emrecelikten | :P | 13:10 |
n4nd0 | emrecelikten: hehe ok | 13:11 |
n4nd0 | do you happen to know about structured output learning algorithms? | 13:12 |
emrecelikten | No :/ | 13:12 |
n4nd0 | ok, no problem ;) | 13:12 |
n4nd0 | I was just wondering what's the state of the art | 13:12 |
n4nd0 | someone will be able to answer me around here sooner or later, there are some experts | 13:12 |
emrecelikten | :) | 13:13 |
-!- bern4rd [1f22c5b5@gateway/web/freenode/ip.31.34.197.181] has joined #shogun | 13:37 | |
bern4rd | hi | 13:38 |
n4nd0 | hey bern4rd | 13:43 |
n4nd0 | I don't think people are around here right now | 13:43 |
bern4rd | ah ok, no problem | 13:44 |
n4nd0 | bern4rd: tomorrow I am starting pattern recognition :) | 13:45 |
n4nd0 | bern4rd: are you taking that course at the end? | 13:45 |
bern4rd | me too | 13:45 |
n4nd0 | nice | 13:46 |
-!- Marty28 [9eb54d46@gateway/web/freenode/ip.158.181.77.70] has joined #shogun | 13:47 | |
n4nd0 | bern4rd: I'll update you when you come ;) | 13:47 |
bern4rd | nice :) Anyway on the 29th I end the internship so I'll be available | 13:48 |
Marty28 | Hi! | 13:48 |
Marty28 | Can you tell me where I can find a documentation for the data in shogun-data-0.3 ? | 13:49 |
n4nd0 | Marty28: for the release of that version? | 13:50 |
n4nd0 | to tell the truth I am not sure there's documentation for the data part of the project ... | 13:50 |
n4nd0 | at least I don't know about its existence | 13:50 |
Marty28 | ok | 13:50 |
n4nd0 | Marty28: what do you want to know in any case? I might be able to help :D | 13:51 |
Marty28 | It is about /asp | 13:51 |
Marty28 | the folder | 13:52 |
Marty28 | seems to be splicing data | 13:52 |
Marty28 | yet no inline documentation | 13:52 |
Marty28 | legacy, probably | 13:53 |
n4nd0 | let's check in the commit messages | 13:54 |
n4nd0 | nothing there | 13:54 |
Marty28 | thx | 13:57 |
Marty28 | seems to be truncated anyway | 13:57 |
Marty28 | says %asplicer definition file version: 1.0 %acceptor splice acc_splice_b=-2.867314e+00 acc_splice_order=22 acc_splice_window_left=60 acc_splice_window_right=79 acc_splice_alphas=[1.408119e+00, 7.82051 acc_splice_svs=[ | 13:57 |
Marty28 | I will ask Rätsch, I bet it is their data. | 13:58 |
n4nd0 | yes I think so | 13:59 |
n4nd0 | have you taken a look to msplice? | 13:59 |
n4nd0 | maybe it is related to it | 13:59 |
Marty28 | mom | 14:00 |
n4nd0 | sorry mgene | 14:00 |
-!- in3xes [~in3xes@122.174.73.211] has quit [Remote host closed the connection] | 14:03 | |
n4nd0 | gtg | 14:04 |
n4nd0 | bye! | 14:04 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 14:04 | |
Marty28 | cya | 14:04 |
-!- bern4rd [1f22c5b5@gateway/web/freenode/ip.31.34.197.181] has quit [Quit: Page closed] | 14:06 | |
-!- Marty28 [9eb54d46@gateway/web/freenode/ip.158.181.77.70] has quit [Quit: Page closed] | 14:07 | |
-!- blackburn [~blackburn@109.226.79.69] has joined #shogun | 16:21 | |
-!- K0stIa [~kostia@alt2.hk.cvut.cz] has joined #shogun | 16:42 | |
-!- K0stIa [~kostia@alt2.hk.cvut.cz] has left #shogun [] | 16:42 | |
--- Log closed Sun Aug 26 17:39:59 2012
--- Log opened Sun Aug 26 17:51:28 2012
-!- shogun-toolbox [~shogun@7nn.de] has joined #shogun | 17:51 | |
-!- Irssi: #shogun: Total of 11 nicks [1 ops, 0 halfops, 0 voices, 10 normal] | 17:51 | |
-!- Irssi: Join to #shogun was synced in 7 secs | 17:51 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun | 17:52 | |
n4nd0 | hey blackburn | 17:52 |
blackburn | hey n4nd0 | 17:52 |
n4nd0 | how are you doing? | 17:52 |
n4nd0 | any news with the segmentation application? | 17:52 |
blackburn | fine, and you? | 17:52 |
blackburn | no i didn't make any progress yet | 17:52 |
blackburn | but | 17:52 |
blackburn | I did implement director model | 17:53 |
blackburn | have you seen? | 17:53 |
n4nd0 | no | 17:53 |
n4nd0 | I didn't | 17:53 |
n4nd0 | but cool :) | 17:53 |
blackburn | it works | 17:53 |
blackburn | :) | 17:53 |
blackburn | I added code and python example yesterday | 17:53 |
n4nd0 | let me check | 17:53 |
blackburn | performance-wise it sucks | 17:54 |
blackburn | but still | 17:54 |
n4nd0 | really? | 17:55 |
n4nd0 | very different from the C++ one? | 17:55 |
blackburn | well generic multiclass training is not efficient to begin with | 17:56 |
blackburn | because of big sparse vectors | 17:56 |
blackburn | and swig stuff makes it pretty slow as well | 17:57 |
n4nd0 | how is it possible that you didn't have to define get_joint_feature_vector? | 17:57 |
n4nd0 | aham there is no need for it in the BMRM algorithm I guess | 17:58 |
blackburn | I made that explicitly in | 17:58 |
blackburn | argmax | 17:58 |
blackburn | BMRM needs only risk | 17:58 |
blackburn | and risk needs only argmax | 17:58 |
n4nd0 | I see | 17:58 |
n4nd0 | I am not sure if BMRM uses argmax too | 17:58 |
blackburn | and that's actually nice | 17:58 |
n4nd0 | apart from inside risk | 17:58 |
blackburn | BMRM uses only risk | 17:58 |
blackburn | but you would need joint feature vectors when computing gradients | 17:59 |
n4nd0 | gradients where? | 18:00 |
blackburn | n4nd0: in risk | 18:05 |
blackburn | subgradients | 18:05 |
blackburn | to be correct | 18:05 |
n4nd0 | blackburn: aham ok, but thanks to generic risk one doesn't need to define it explicitly either | 18:08 |
blackburn | yes | 18:08 |
blackburn | n4nd0: what should I define to use HM-SVM? | 18:11 |
blackburn | what do matrix features contain in terms of the HM model? | 18:12 |
n4nd0 | the matrix features contain the observations | 18:13 |
blackburn | so if I have observations | 18:14 |
blackburn | [0.3, 0.2, 0.1] | 18:14 |
blackburn | [0.5, 0.5, 0.9] | 18:14 |
blackburn | what would be the matrix? | 18:14 |
blackburn | ^ or transposed ^? | 18:14 |
n4nd0 | ok let's see | 18:15 |
n4nd0 | you should first tell me how many features you have | 18:15 |
n4nd0 | and time steps | 18:15 |
n4nd0 | from what you have written there | 18:15 |
blackburn | okay see what I mean | 18:16 |
blackburn | when we are segmenting image | 18:16 |
n4nd0 | I would guess for 1 feature, 4 time steps and two feature vectors | 18:16 |
blackburn | we go through all image | 18:16 |
blackburn | observations are 9 numbers | 18:16 |
n4nd0 | ok | 18:16 |
blackburn | pixel values | 18:16 |
blackburn | of the pixel and its neighborhood | 18:16 |
blackburn | see what I mean? | 18:16 |
n4nd0 | so for every pixel you get 9 values? | 18:17 |
blackburn | yes exactly | 18:17 |
n4nd0 | ok | 18:17 |
blackburn | assume image is square | 18:17 |
blackburn | so | 18:17 |
blackburn | it requires N*N timesteps | 18:17 |
blackburn | got it? | 18:17 |
n4nd0 | aham ok | 18:17 |
n4nd0 | but the point is that with just an image | 18:18 |
n4nd0 | I don't really see where is the time dimension | 18:18 |
blackburn | n4nd0: you just iterate over all pixels | 18:18 |
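The feature extraction described here (9 pixel values per pixel, borders skipped) can be sketched in plain Python. This is an illustrative aside under the chat's assumptions, not the actual code used:

```python
# Sketch of the feature extraction described above: for each interior
# pixel of an NxN image, collect the 9 values of its 3x3 neighbourhood.
# Illustrative only; the real shogun-side code may differ.

def neighbourhood_features(image):
    """image: NxN list of lists; returns one 9-value list per interior pixel.

    Border pixels are skipped (as in the chat), so an NxN image yields
    (N-2)*(N-2) observations, each of dimension 9.
    """
    n = len(image)
    observations = []
    for i in range(1, n - 1):          # iterate over all interior pixels
        for j in range(1, n - 1):
            obs = [image[i + di][j + dj]
                   for di in (-1, 0, 1)
                   for dj in (-1, 0, 1)]
            observations.append(obs)
    return observations

# A 50x50 image gives 48*48 = 2304 observations of 9 pixel values each,
# matching the (2304, 9) shape mentioned later in the discussion.
```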
n4nd0 | ok got it | 18:21 |
blackburn | so all I need is to define that matrix and sequence | 18:22 |
blackburn | right? | 18:22 |
n4nd0 | wait a sec | 18:22 |
n4nd0 | what matrix and seq? :D | 18:23 |
blackburn | observation matrix and sequence of {foreground,background} | 18:23 |
n4nd0 | yes | 18:23 |
blackburn | oh nice | 18:23 |
blackburn | okay will be done soon then | 18:23 |
n4nd0 | the sequence represent the labels | 18:24 |
n4nd0 | cool! | 18:24 |
blackburn | is there an interface to construct labels? | 18:24 |
blackburn | in python | 18:24 |
n4nd0 | do you want to discuss something more about the matrix features? | 18:24 |
blackburn | n4nd0: last thing not clear for me is | 18:24 |
blackburn | what is #rows and #cols | 18:24 |
n4nd0 | blackburn: to construct the labels just create an HMSVMLabels instance | 18:24 |
n4nd0 | you should be able to do it giving an SGVector in the constructor, i.e. an nparray | 18:25 |
n4nd0 | for python I meant | 18:25 |
blackburn | so I would need to put | 18:25 |
blackburn | 0 and 1 | 18:25 |
blackburn | there | 18:25 |
blackburn | right? | 18:25 |
n4nd0 | yes | 18:25 |
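Building that 0/1 label sequence from a foreground mask can be sketched in plain Python; the `mask_to_sequence` helper is hypothetical (it only illustrates the flattening order, it is not the shogun HMSVMLabels API):

```python
# Hypothetical helper: flatten a foreground mask into the 0/1 state
# sequence, visiting interior pixels in the same row-major order as
# the neighbourhood observations (borders skipped, as in the chat).

def mask_to_sequence(mask):
    n = len(mask)
    return [1 if mask[i][j] else 0
            for i in range(1, n - 1)
            for j in range(1, n - 1)]

# e.g. a 4x4 mask yields a sequence of 2*2 = 4 background/foreground states
```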
n4nd0 | ok, now about the matrix features | 18:26 |
n4nd0 | the matrix features internally is an SGMatrixList, a list of matrices | 18:26 |
n4nd0 | each matrix in this list represents the observations of an image, ok? | 18:26 |
blackburn | hmmmm | 18:27 |
blackburn | ah | 18:27 |
n4nd0 | if you want to train your HM-SVM using 100 images, your matrix features will be composed of 100 matrices | 18:27 |
blackburn | right sure | 18:27 |
blackburn | yes | 18:27 |
n4nd0 | all right | 18:27 |
blackburn | that's perfectly clear | 18:27 |
n4nd0 | now, for each matrix, what are the rows and columns? | 18:27 |
blackburn | yes | 18:28 |
n4nd0 | the number of columns is the same as the dimension of the sequence | 18:28 |
n4nd0 | the time of this time series | 18:28 |
n4nd0 | you see what I mean? | 18:29 |
n4nd0 | it was not a very good description to tell the truth .... | 18:29 |
blackburn | no it is good | 18:29 |
blackburn | so in case I have 200x200 image | 18:29 |
blackburn | I've got 40000 cols | 18:29 |
n4nd0 | exactly | 18:29 |
blackburn | each containing say 9 observation values | 18:29 |
blackburn | uh I will try on 50x50 first | 18:30 |
blackburn | :D | 18:30 |
n4nd0 | I think that should be your feature dimension | 18:30 |
n4nd0 | hehe | 18:30 |
n4nd0 | got it? | 18:31 |
blackburn | yeah I think I did | 18:31 |
blackburn | lets try to code it | 18:31 |
n4nd0 | ok | 18:32 |
n4nd0 | and let's hope it is a good model for this problem :) | 18:32 |
n4nd0 | it would be really cool if it works | 18:32 |
n4nd0 | I wonder how people find these things out | 18:33 |
blackburn | CRF works for that very nice | 18:33 |
n4nd0 | aham | 18:35 |
n4nd0 | have you taken the inspiration of the features from it? | 18:35 |
blackburn | no I'll try pixels first | 18:35 |
blackburn | it is easy to change features afterwards | 18:35 |
n4nd0 | ok | 18:36 |
blackburn | n4nd0: can state be unknown? | 18:40 |
blackburn | kind of latent state | 18:40 |
blackburn | when you have no idea whether it is a background or a foreground pixel | 18:41 |
n4nd0 | mmm no with the state model we use | 18:41 |
blackburn | yes, I mean in theory | 18:41 |
n4nd0 | yeah in theory | 18:42 |
n4nd0 | I mean, you could add just one state there, unknown | 18:42 |
blackburn | but it would be predicted then | 18:42 |
n4nd0 | not really | 18:42 |
n4nd0 | you could do a state model | 18:43 |
n4nd0 | with this unknown state | 18:43 |
n4nd0 | you model the possibility of going from BG or FG to this unknown state | 18:43 |
n4nd0 | but you don't model the observations from this model | 18:43 |
n4nd0 | it is something similar to the start and stop states that are internally used in the TwoStateModel | 18:43 |
n4nd0 | sorry | 18:44 |
blackburn | I see | 18:44 |
n4nd0 | but you don't model the observations from this state | 18:44 |
n4nd0 | I said "from this model" before, that was wrong | 18:44 |
blackburn | yes I got it | 18:44 |
n4nd0 | :) | 18:44 |
n4nd0 | but this should work fine with some noise in there | 18:44 |
n4nd0 | so I think you can still use the TwoStateModel and maybe it is not affected badly | 18:45 |
blackburn | okay lets see what it would be | 18:45 |
n4nd0 | what kind of images are you using? | 18:47 |
blackburn | cat :D | 18:47 |
blackburn | one image yet | 18:47 |
n4nd0 | isn't it a pain to "label" it? | 18:48 |
blackburn | I asked my gf before | 18:48 |
blackburn | :D | 18:48 |
n4nd0 | I mean to generate this background/foreground vector | 18:48 |
n4nd0 | did she do it? | 18:49 |
blackburn | yes for one image | 18:49 |
blackburn | it is not that hard | 18:49 |
n4nd0 | not hard but kind of pain in the ass to do it | 18:50 |
n4nd0 | I guess it depens on the method too | 18:50 |
blackburn | no not really, takes 3-4 minutes | 18:51 |
n4nd0 | aham not that bad then | 18:51 |
n4nd0 | do you use something like a GUI? | 18:51 |
n4nd0 | or? | 18:51 |
blackburn | https://dl.dropbox.com/u/10139213/segm/image.jpg | 18:51 |
blackburn | https://dl.dropbox.com/u/10139213/segm/mask.jpg | 18:51 |
blackburn | I am going to teach HM model on that image and predict | 18:52 |
blackburn | and we will see what it models | 18:52 |
n4nd0 | just for an image? | 18:53 |
blackburn | yes | 18:53 |
n4nd0 | let's see | 18:53 |
blackburn | n4nd0: hmsvmlabels segfault on adding if default constructor is called | 18:55 |
blackburn | one should be careful :) | 18:55 |
n4nd0 | ups | 18:56 |
n4nd0 | probably it is because no memory is allocated for the DynamicObjectArray in StructuredLabels | 18:59 |
blackburn | n4nd0: is loss relevant? | 19:00 |
n4nd0 | could be | 19:00 |
n4nd0 | we just have hinge loss this far though | 19:01 |
blackburn | I see | 19:01 |
blackburn | SystemError: Out of memory error, tried to allocate 7378945280 bytes using malloc. | 19:02 |
blackburn | uh | 19:02 |
n4nd0 | dafuq | 19:03 |
n4nd0 | really? just with one image | 19:03 |
n4nd0 | it looks weird to me | 19:03 |
blackburn | something is wrong I believe ;) | 19:03 |
n4nd0 | yeah I think so | 19:03 |
n4nd0 | how big is the image? | 19:03 |
blackburn | 50 x 50 now | 19:04 |
n4nd0 | there shouldn't be a memory error for that indeed | 19:04 |
blackburn | I get 0 | 19:05 |
n4nd0 | ? | 19:05 |
blackburn | w=0 | 19:05 |
n4nd0 | not good | 19:06 |
n4nd0 | what training algorithm? | 19:06 |
n4nd0 | BMRM? | 19:06 |
blackburn | yes | 19:06 |
n4nd0 | send me the code and I can try to train it with PrimalMosekSOSVM | 19:06 |
blackburn | again - | 19:06 |
blackburn | # of cols | 19:07 |
blackburn | = time dimension | 19:07 |
blackburn | right? | 19:07 |
n4nd0 | yes, let me check just in case | 19:07 |
n4nd0 | in this case it should be 2500 | 19:07 |
blackburn | I do not use border pixels | 19:07 |
blackburn | so feature matrix | 19:08 |
blackburn | is | 19:08 |
blackburn | (2304, 9) | 19:08 |
blackburn | is that correct? | 19:08 |
n4nd0 | mmm | 19:08 |
n4nd0 | number of columns is the 2nd dimension | 19:08 |
blackburn | yes | 19:08 |
n4nd0 | shouldn't it be the other way? | 19:08 |
blackburn | I am a little mixed | 19:09 |
n4nd0 | num_rows = 9 | 19:09 |
n4nd0 | num_cols = 2304 | 19:09 |
blackburn | that's how it should be? | 19:09 |
blackburn | no it is now | 19:09 |
n4nd0 | yes | 19:09 |
n4nd0 | the length of the state sequence | 19:10 |
n4nd0 | should be equal to the second dimension of each element in matrix features | 19:10 |
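The fix being described (the matrix was built as (2304, 9) but needs rows = features and columns = time steps, i.e. (9, 2304)) amounts to a transpose. A plain-Python sketch, illustrative only; with numpy this would just be `m.T`:

```python
# Illustrative: transpose a (time_steps, feature_dim) matrix into the
# (feature_dim, time_steps) layout the matrix features expect.
# Shapes follow the chat: 48*48 = 2304 interior pixels, 9 values each.

time_steps, feature_dim = 2304, 9
wrong_layout = [[0.0] * feature_dim for _ in range(time_steps)]   # (2304, 9)

right_layout = [list(col) for col in zip(*wrong_layout)]          # (9, 2304)

num_rows, num_cols = len(right_layout), len(right_layout[0])
assert num_rows == feature_dim        # one row per feature
assert num_cols == time_steps         # one column per time step
# the 0/1 state sequence must have one label per column:
labels = [0] * num_cols
assert len(labels) == num_cols
```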
blackburn | oh | 19:12 |
blackburn | not zero now | 19:12 |
n4nd0 | cool | 19:13 |
n4nd0 | what was it? | 19:13 |
blackburn | ehm | 19:13 |
blackburn | how to extract sequence now | 19:13 |
blackburn | from a prediction | 19:13 |
blackburn | :) | 19:13 |
n4nd0 | so apply gives you StructuredLabels right? | 19:14 |
blackburn | yeah | 19:14 |
blackburn | got it | 19:14 |
n4nd0 | use HMSVMLabels::obtain_generic to get HMSVMLabels | 19:14 |
blackburn | I don't have to | 19:14 |
n4nd0 | and later just do a get_label(0) | 19:14 |
n4nd0 | ok... | 19:14 |
blackburn | I should rather obtain Sequence from generic | 19:14 |
blackburn | get_label(0) works already - it is defined in structured labels | 19:15 |
n4nd0 | true | 19:15 |
n4nd0 | the first obtain_generic is not really necessary | 19:15 |
n4nd0 | a waste of time :D | 19:15 |
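Once the predicted state sequence is extracted from the labels, turning it back into a 2-D mask is just a reshape. An illustrative plain-Python sketch; the `sequence_to_mask` helper is hypothetical, not a shogun function:

```python
# Hypothetical helper: reshape a flat 0/1 state sequence (one state per
# interior pixel, row-major) back into a side x side mask image.

def sequence_to_mask(sequence, side):
    assert len(sequence) == side * side
    return [sequence[r * side:(r + 1) * side] for r in range(side)]

# For the 50x50 image with borders skipped, side = 48:
mask = sequence_to_mask([0] * (48 * 48), 48)
```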
blackburn | argh | 19:15 |
n4nd0 | what? | 19:15 |
blackburn | typemap didn't work | 19:15 |
blackburn | I've got sgvector | 19:15 |
-!- in3xes [~in3xes@122.174.73.211] has joined #shogun | 19:16 | |
blackburn | I will add a getter there | 19:17 |
n4nd0 | yes | 19:18 |
n4nd0 | I did that once but didn't commit it | 19:18 |
n4nd0 | I thought I was doing something wrong in python | 19:18 |
n4nd0 | why the typemap doesn't work? | 19:18 |
blackburn | no idea | 19:19 |
blackburn | I hope it would be typemapped now | 19:22 |
blackburn | everything is a background | 19:24 |
blackburn | :D | 19:24 |
n4nd0 | too bad | 19:25 |
n4nd0 | but I think that the concept of training with one image is weird | 19:25 |
blackburn | but it should fit to the image, right? | 19:26 |
n4nd0 | if we map it to non-structured learning | 19:26 |
n4nd0 | I don't really know to tell the truth | 19:26 |
n4nd0 | from the point of view of the HM-SVM it looks like that | 19:26 |
n4nd0 | but if you think of the training algorithm | 19:26 |
blackburn | if we train svm with one point | 19:26 |
n4nd0 | you just have one example | 19:27 |
blackburn | it won't make an error | 19:27 |
n4nd0 | so it is weird | 19:27 |
n4nd0 | I don't think that is the right way of thinking of it ... | 19:27 |
blackburn | oh I've got something | 19:27 |
n4nd0 | what? | 19:27 |
blackburn | some strange pattern | 19:27 |
n4nd0 | in any case | 19:31 |
n4nd0 | I think that for one example it is not going to work very well | 19:31 |
n4nd0 | at least it gives me that feeling regarding the way in which the training algorithm works | 19:32 |
blackburn | yes | 19:32 |
blackburn | but still | 19:32 |
n4nd0 | hehe you are hard to convince | 19:33 |
blackburn | lambda changes the game | 19:33 |
blackburn | well I saw approximately correct result | 19:33 |
n4nd0 | we need some sort of model selection for that lambda very badly | 19:35 |
blackburn | http://research.microsoft.com/en-us/um/people/jckrumm/WallFlower/TestImages.htm | 19:36 |
blackburn | n4nd0: I'll try on that ^ | 19:36 |
n4nd0 | ok | 19:37 |
n4nd0 | this background maintenance thing seems to be a different concept | 19:37 |
n4nd0 | but probably we can use the data for our purpose? | 19:37 |
blackburn | why different? | 19:48 |
blackburn | ah | 19:48 |
blackburn | something is kinda different yes | 19:49 |
blackburn | n4nd0: https://dl.dropbox.com/u/10139213/segm/cat.png | 20:07 |
blackburn | not that bad in general | 20:07 |
n4nd0 | just one image in training? | 20:08 |
blackburn | yes | 20:08 |
n4nd0 | yes, not too bad | 20:08 |
n4nd0 | it will probably be better with more images, I guess | 20:08 |
blackburn | sure | 20:08 |
blackburn | and with better features | 20:08 |
n4nd0 | can you use the images there then? | 20:09 |
blackburn | from microsoft research? | 20:09 |
blackburn | I'll try later | 20:09 |
n4nd0 | yeah | 20:09 |
n4nd0 | ok | 20:10 |
n4nd0 | brb | 20:10 |
blackburn | I need to learn blender, there's no way to get back to 3ds max, which I know pretty well :( | 20:14 |
blackburn | time to update NEWS | 20:15 |
n4nd0 | for what do you want to use blender? | 20:16 |
n4nd0 | you don't like 3D max? | 20:16 |
blackburn | to generate synthetic images | 20:16 |
blackburn | I don't have windows | 20:16 |
n4nd0 | synthetic images? | 20:17 |
blackburn | n4nd0: yes, for example some vase to segment :) | 20:17 |
blackburn | or anything like that | 20:17 |
blackburn | car | 20:17 |
blackburn | anything | 20:17 |
n4nd0 | aham | 20:17 |
n4nd0 | in 3D? | 20:18 |
blackburn | yes | 20:18 |
-!- in3xes [~in3xes@122.174.73.211] has quit [Ping timeout: 256 seconds] | 20:18 | |
n4nd0 | time for dinner, see you later | 20:18 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 20:18 | |
CIA-52 | shogun: Sergey Lisitsyn master * re0696f4 / src/interfaces/python_modular/DenseFeatures_protocols.i : Merge pull request #758 from Nightrain/testbranch - http://git.io/sRVNOQ | 21:10 |
CIA-52 | shogun: Sergey Lisitsyn master * r1468c7e / src/shogun/labels/MulticlassLabels.cpp : Changed get binary for class method - http://git.io/4P2KRw | 22:16 |
shogun-buildbot | build #475 of deb3 - modular_interfaces is complete: Failure [failed compile csharp_modular] Build details are at http://www.shogun-toolbox.org/buildbot/builders/deb3%20-%20modular_interfaces/builds/475 blamelist: Sergey Lisitsyn <lisitsyn.s.o@gmail.com> | 22:50 |
shogun-buildbot | build #476 of deb3 - modular_interfaces is complete: Failure [failed compile csharp_modular] Build details are at http://www.shogun-toolbox.org/buildbot/builders/deb3%20-%20modular_interfaces/builds/476 blamelist: Sergey Lisitsyn <lisitsyn.s.o@gmail.com> | 23:06 |
CIA-52 | shogun: Sergey Lisitsyn master * r0e7695a / (4 files in 2 dirs): Fixed csharp crasher with making sequence data protected - http://git.io/OvKx4g | 23:39 |
-!- zxtx [~zv@cpe-75-83-151-252.socal.res.rr.com] has quit [Ping timeout: 248 seconds] | 23:56 | |
--- Log closed Mon Aug 27 00:00:17 2012
Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!