--- Log opened Mon Feb 13 00:00:19 2012
-!- axitkhurana [d2d43a6f@gateway/web/freenode/ip.210.212.58.111] has quit [Quit: Page closed] | 00:01 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 00:23 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Quit: Leaving] | 01:23 | |
-!- dfrx [~f-x@inet-hqmc08-o.oracle.com] has joined #shogun | 05:26 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 09:26 | |
CIA-18 | shogun: Steve Lianoglou master * r0e2fafe / src/.r-install.sh : (log message trimmed) | 09:39 |
CIA-18 | shogun: 1/2-way fix for r_static install procedure. | 09:39 |
CIA-18 | shogun: * The structure of the package.rds file has been broken for some time, | 09:39 |
CIA-18 | shogun: this fixes it by adding a $DESCRIPTION['Built'] character vector. It | 09:39 |
CIA-18 | shogun: seems that this has been required since (I believe, at least) R 2.13. | 09:39 |
CIA-18 | shogun: * Cleaned up code to set RVERSION, PLATFORM, and OSTYPE. The current | 09:39 |
CIA-18 | shogun: way to build RVERSION is improved and allows for sg to be built against | 09:39 |
CIA-18 | shogun: Soeren Sonnenburg master * rc08a984 / src/.r-install.sh : | 09:39 |
CIA-18 | shogun: Merge pull request #371 from lianos/master | 09:39 |
CIA-18 | shogun: This is a 1/2-way fix for the install procedure for the r_static interface. - http://git.io/ikR2mQ | 09:39 |
-!- sonne|work [~sonnenbu@194.78.35.195] has joined #shogun | 09:46 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 10:24 | |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has joined #shogun | 10:31 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 10:36 | |
blackburn | wiking: im still waiting for JS kernel results :D | 10:59 |
wiking | blackburn: no fucking way | 10:59 |
blackburn | it is TOO slow :) | 10:59 |
wiking | hahahah | 10:59 |
wiking | in the meanwhile i've got some new kernels coming up | 10:59 |
wiking | we'll see about their results | 10:59 |
blackburn | really? which one? | 10:59 |
blackburn | ones* | 10:59 |
wiking | but yeah a little bit of parallel loop there in the compute function would do some good | 10:59 |
wiking | some generalization of js-kernel, like jensen-renyi kernel | 10:59 |
blackburn | I'm going to merge vlfeat things | 10:59 |
wiking | into shogun? | 10:59 |
blackburn | yeah | 10:59 |
wiking | great! :> | 10:59 |
wiking | homokernelmap? | 10:59 |
blackburn | yeah I'm thinking about it | 10:59 |
blackburn | well maybe just provide some adapter to vlfeat's code | 10:59 |
wiking | that'd be great | 10:59 |
wiking | the homo kernel map is only one .c code in the vlfeat proj | 10:59 |
wiking | so could be easily copied i think | 10:59 |
wiking | and the code is BSD licensed | 10:59 |
wiking | so no worries there | 10:59 |
wiking | just wrap a c++ class around it and that's it i think | 10:59 |
blackburn | yeah I'm going to do it once I get some vodka | 10:59 |
blackburn | :D | 10:59 |
wiking | hahahah | 10:59 |
wiking | and maybe some stuff you'll need from vl/mathop.h | 11:00 |
wiking | [master vl] $ wc -l homkermap.c [wiking@welitron:~/vlfeat/vl] | 11:00 |
wiking | 501 homkermap.c | 11:00 |
wiking | so 501 lines of code+comment | 11:00 |
blackburn | hmmm | 11:00 |
blackburn | looks like it is pretty simple | 11:00 |
wiking | but yeah make it nice so that i could quickly implement the new JS-derivative kernels within the homokernelmap as well | 11:01 |
wiking | as all the kernels i've checked so far are positive definite | 11:01 |
wiking | and summing kernel | 11:02 |
wiking | so i'm hoping to be able to produce the homo-kernel-map of it as well | 11:02 |
blackburn | for some strange reason the first word of this map makes me smile now | 11:03 |
blackburn | would be funny to call it homogaykernelmap | 11:03 |
blackburn | :D | 11:03 |
blackburn | is there any conceptual difference between JS and JR kernels? | 11:04 |
wiking | yep | 11:04 |
wiking | js is based on shannon entropy | 11:05 |
wiking | jr is renyi entropy | 11:05 |
wiking | and renyi entropy is a generalization of classical (shannon) entropy in information theory | 11:05 |
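For reference, the quantities being compared here can be sketched in a few lines of Python (the function names are mine, not shogun's). The JS divergence underlying the JS kernel is the Shannon entropy of the mixture minus the mean of the two entropies, and Rényi entropy recovers Shannon entropy in the limit alpha → 1:

```python
import math

def shannon_entropy(p):
    # H(p) = -sum p_i log p_i (with the convention 0 log 0 = 0)
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi_entropy(p, alpha):
    # H_a(p) = log(sum p_i^a) / (1 - a); tends to Shannon entropy as a -> 1
    assert alpha != 1
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)

def js_divergence(p, q):
    # JS(p, q) = H((p+q)/2) - (H(p) + H(q)) / 2, built on Shannon entropy
    m = [(a + b) / 2.0 for a, b in zip(p, q)]
    return shannon_entropy(m) - (shannon_entropy(p) + shannon_entropy(q)) / 2.0

p, q = [0.5, 0.5], [0.9, 0.1]
print(js_divergence(p, p))      # 0.0 for identical distributions
print(js_divergence(p, q))      # strictly positive
print(renyi_entropy(p, 0.999))  # close to shannon_entropy(p) = log 2
```

Swapping the Shannon entropies for Rényi entropies at some alpha gives the Jensen-Rényi divergence the JR kernel is built on.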
wiking | anyhow i'll run some test before doing anything | 11:06 |
wiking | i'm just a bit sick now | 11:06 |
wiking | so everything is slower | 11:06 |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has quit [Quit: Page closed] | 11:16 | |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has joined #shogun | 11:16 | |
blackburn | wiking: LSVM things look very promising | 11:42 |
wiking | ? | 11:43 |
blackburn | I just checked shogun gsoc idea text about lsvm :) | 11:44 |
blackburn | sonne|work: damn how to catch you | 11:45 |
blackburn | wiking: did you already implement a JR kernel? | 11:48 |
wiking | about to finish | 11:48 |
blackburn | nice | 11:50 |
-!- dfrx [~f-x@inet-hqmc08-o.oracle.com] has quit [Quit: Leaving.] | 12:32 | |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has joined #shogun | 12:35 | |
wiking | blackburn: yeah that'd be cool to do it maybe this year | 12:35 |
wiking | as gsoc | 12:35 |
wiking | i mean l-ssvm | 12:35 |
blackburn | we should check if alex binder is available for mentoring | 12:35 |
wiking | heheh | 12:39 |
wiking | he's not coming around here ? | 12:40 |
blackburn | no, never seen him here | 12:40 |
wiking | mailing list i guess then | 12:40 |
wiking | blackburn: r u doing the homo-kernel-mapping stuff now? | 12:41 |
wiking | or should i dig myself into it ? | 12:41 |
blackburn | no, I'm on my job right now | 12:41 |
blackburn | hah feel free if you want | 12:41 |
wiking | heheh ok | 12:41 |
blackburn | wiking: I don't mind you doing anything ;) | 12:42 |
wiking | eheheheh i was just meaning that let's not do double work if possible :>> | 12:42 |
wiking | ah yeah | 12:43 |
wiking | can you accept a pull request? | 12:43 |
blackburn | while I'm working last year undergraduate I have not much time | 12:43 |
blackburn | yes, sure | 12:43 |
wiking | ok so should i do an official request or can u just pull this one in | 12:43 |
blackburn | [15:42] <wiking> eheheheh i was just meaning that let's not do double work if possible :>> ---- yeah, we shouldn't :) | 12:43 |
wiking | https://github.com/vigsterkr/shogun/commit/4d01e40b85aa928d9d49dfad9a2d4cba2a14504e | 12:43 |
wiking | it's a few less operations this way | 12:44 |
blackburn | hmm sure | 12:44 |
wiking | you know 2 divisions vs 1 multiplication ;) | 12:44 |
wiking | so it should speed up things | 12:44 |
wiking | but then again | 12:44 |
wiking | not that much imho ;) | 12:45 |
blackburn | wiking: but I can't it from job, will be able to merge it a little later | 12:45 |
blackburn | do it* | 12:45 |
blackburn | can't do it* | 12:45 |
blackburn | :) | 12:45 |
wiking | no worries | 12:45 |
wiking | just don't forget | 12:45 |
wiking | ;P | 12:45 |
blackburn | just make a pull request and I won't forget for sure | 12:45 |
wiking | ok so now i'm testing the new kernels :DDD | 12:45 |
wiking | ah ok | 12:45 |
wiking | so let's see the thingy | 12:45 |
wiking | i guess it's gonna take some time now :P | 12:45 |
wiking | done | 12:46 |
blackburn | thanks | 12:48 |
wiking | nw | 12:49 |
blackburn | wiking: btw it is not a division/multiplication improvement | 12:49 |
wiking | it is | 12:49 |
wiking | ;) | 12:49 |
blackburn | there are some ifs | 12:49 |
blackburn | so it couldn't be optimized | 12:49 |
wiking | ?? | 12:49 |
blackburn | I think gcc optimizes (1/const) things | 12:49 |
wiking | how? :) | 12:50 |
wiking | i mean a division is a division | 12:50 |
wiking | so if i do a/2 and b/2 | 12:50 |
blackburn | at least with ffast-math | 12:50 |
wiking | how? :) | 12:50 |
blackburn | well now I'm sure | 12:51 |
blackburn | I'mnot | 12:51 |
blackburn | :) | 12:51 |
blackburn | wiking: yeah there are flags like -funsafe-math-optimizations etc | 12:52 |
blackburn | they let gcc compute x*(1/y) instead of x/y | 12:52 |
blackburn | this case contained two 0.5* or /2 but now only one | 12:53 |
blackburn | that's what I meant | 12:54 |
wiking | :P | 13:09 |
wiking | it should be a bit faster now, but again it should be parallelized | 13:09 |
wiking | is there a reason not to have openmp in shogun? | 13:09 |
blackburn | no, feel free to openmp it | 13:09 |
wiking | ok | 13:10 |
blackburn | but you should guard it with #ifdef | 13:10 |
wiking | yeah sure | 13:10 |
wiking | that'll have to do some changes in the configure script as well | 13:10 |
blackburn | the problem is kernels are parallelized already | 13:10 |
wiking | anyhow then i'll do that meanwhile i'm doing the test | 13:10 |
wiking | in what sense? | 13:10 |
blackburn | e.g. one thread computes one row of the kernel matrix | 13:11 |
blackburn | (if kernel matrix is being computed) | 13:11 |
wiking | i mean currently when i'm running training | 13:11 |
wiking | it only runs on one core | 13:11 |
blackburn | yes | 13:11 |
wiking | so i've figured that in this case the for loop could be sliced up between the number of cores | 13:12 |
wiking | and just do the calc in separate threads | 13:12 |
blackburn | IIRC Soeren doesn't mind openmping it as well as I | 13:12 |
blackburn | sure, makes sense for me too | 13:12 |
wiking | ok | 13:12 |
wiking | ok i'll then send the pull request when it's ready | 13:13 |
blackburn | wiking: I would suggest to do following: | 13:13 |
blackburn | introduce flag in Kernel.h | 13:13 |
blackburn | use_openmp | 13:14 |
blackburn | or so | 13:14 |
wiking | ? | 13:14 |
wiking | why not an ifdef | 13:14 |
blackburn | and do two branches in each kernel | 13:14 |
wiking | that checks a macro | 13:14 |
wiking | that is set by the configure script | 13:14 |
blackburn | hmmm imagine you need to compute whole kernel matrix | 13:14 |
blackburn | then it is using 2 cores already | 13:15 |
blackburn | i.e. it runs compute() at the same time | 13:15 |
blackburn | for different rows | 13:15 |
blackburn | and if compute() wants two threads as well | 13:15 |
blackburn | it would be slower | 13:15 |
wiking | ah got it | 13:16 |
blackburn | so in computing kernel matrix things we could disable openmp if we are using multiple threads | 13:16 |
wiking | or of course | 13:16 |
blackburn | I think double parallelization could be bad | 13:16 |
wiking | you could even do better by setting the number of threads openmp allowed to use | 13:17 |
wiking | and then if you have like 8 cores... | 13:17 |
wiking | anyhow this is kind of the last thing i wanna test :))) | 13:17 |
wiking | more interested in other kernels | 13:18 |
blackburn | something like 2 for kernel matrix and 6 for openmp inside compute()? | 13:18 |
wiking | and maybe the homkernelmapping.. | 13:18 |
wiking | yep | 13:18 |
blackburn | kinda complex thing.. | 13:18 |
blackburn | no idea what would be best | 13:18 |
wiking | you can tell openmp how many threads it can use in its threadpool | 13:18 |
blackburn | yeah I know | 13:18 |
blackburn | we have pretty dirty parallelization, should be cleaned indeed | 13:19 |
blackburn | I'm just not sure whether openmp can handle some complex cases | 13:20 |
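The "one thread computes one row of the kernel matrix" scheme blackburn describes can be sketched like this (in Python with a thread pool rather than shogun's C++ threading code; the `rbf` function is a toy stand-in for a kernel's compute()):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def rbf(x, y, width=1.0):
    # toy stand-in for a kernel's compute(); any k(x, y) would work here
    d = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d / width)

def kernel_matrix(data, kernel, n_threads=2):
    # one task fills one whole row of the matrix -- the row-per-thread scheme;
    # if kernel() itself also spawned threads, the two levels would compete,
    # which is exactly the double-parallelization concern above
    n = len(data)
    K = [[0.0] * n for _ in range(n)]

    def fill_row(i):
        for j in range(n):
            K[i][j] = kernel(data[i], data[j])

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(fill_row, range(n)))  # list() forces completion
    return K

data = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = kernel_matrix(data, rbf)
```

Since each task writes a distinct row, no locking is needed; the result is the same symmetric matrix a serial double loop would produce.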
wiking | hehehe | 13:21 |
wiking | i'll try to do some test | 13:21 |
wiking | anyhow let's see the new kernel :))) | 13:21 |
blackburn | yeah I'm pretty curious | 13:21 |
wiking | me too | 13:22 |
wiking | but it's even worse | 13:22 |
wiking | computing time wise :P | 13:22 |
blackburn | what is the equation? | 13:22 |
wiking | exp (....) | 13:22 |
wiking | ;) | 13:22 |
blackburn | worst kernel ever: exp(sin+cos+tg+arctg+exp(exp(exp)) | 13:23 |
wiking | :D | 13:23 |
blackburn | explicitly mapping to infinite-dimensional space | 13:26 |
blackburn | :) | 13:26 |
wiking | mmm fuck | 13:27 |
wiking | i should test this with custom kernel | 13:27 |
blackburn | why? | 13:32 |
blackburn | wiking: btw are you still using java? | 13:34 |
wiking | well | 13:34 |
wiking | gotta | 13:34 |
wiking | my data set is kind of coming from mahout/hadoop | 13:34 |
wiking | and i don't want to write data transcoding functions now | 13:34 |
wiking | it's easier this way | 13:34 |
blackburn | I see | 13:34 |
blackburn | I can't live without matplotlib :) | 13:35 |
wiking | btw: sum(a^q) * sum (b^q) = sum((a*b)^q) | 13:35 |
wiking | ah no | 13:35 |
wiking | that is for sure not true | 13:36 |
wiking | :))) | 13:36 |
blackburn | true for the most common case of d=1 :D | 13:37 |
wiking | :>> | 13:38 |
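wiking's retracted identity is easy to check numerically: it holds for scalars (the d=1 case blackburn jokes about, since a^q * b^q = (a*b)^q), but fails as soon as the vectors have more than one component. A toy check, not shogun code:

```python
def lhs(a, b, q):
    # sum(a^q) * sum(b^q)
    return sum(x ** q for x in a) * sum(x ** q for x in b)

def rhs(a, b, q):
    # sum((a*b)^q), elementwise product
    return sum((x * y) ** q for x, y in zip(a, b))

# d = 1: a^q * b^q == (a*b)^q, so the identity holds for scalars
print(lhs([2.0], [3.0], 2), rhs([2.0], [3.0], 2))   # 36.0 36.0

# d = 2: already fails for the all-ones vectors with q = 1
print(lhs([1.0, 1.0], [1.0, 1.0], 1))   # 4.0
print(rhs([1.0, 1.0], [1.0, 1.0], 1))   # 2.0
```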
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 13:45 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 13:45 | |
n4nd0 | blackburn, hi | 13:48 |
blackburn | hi there | 13:48 |
n4nd0 | blackburn, about the ConjugateIndex class | 13:48 |
blackburn | yeah I'm curious whether this shit works or not | 13:48 |
blackburn | :) | 13:48 |
n4nd0 | blackburn, I have seen one example in the python modular interface using it but it cannot be imported | 13:48 |
blackburn | mmm do you have lapack? | 13:48 |
n4nd0 | blackburn, I get an error from the interpreter, I didn't find it in the doc. either | 13:49 |
n4nd0 | I don't think I have it because right now I don't even know what is it :P | 13:49 |
blackburn | remember the LPBoost thing, there was an #ifdef HAVE_CPLEX | 13:49 |
blackburn | hmm you would have to install it :) | 13:50 |
n4nd0 | ok | 13:50 |
blackburn | n4nd0: I can't recall which os do u use | 13:50 |
n4nd0 | since I didn't find it in the doc I thought that maybe the name had changed or sth | 13:50 |
blackburn | any ubuntu/debian? | 13:50 |
n4nd0 | blackburn, ubuntu | 13:50 |
blackburn | nice | 13:51 |
blackburn | then just install everything having name *lapack :) | 13:51 |
blackburn | and reconfigure | 13:51 |
blackburn | n4nd0: btw to save time you could also install | 13:54 |
blackburn | libsuperlu3 | 13:54 |
blackburn | and libarpack | 13:54 |
blackburn | and libsuperlu3-dev | 13:54 |
blackburn | and libarpack-dev | 13:54 |
blackburn | those enable better performance for dim. reduction | 13:54 |
n4nd0 | blackburn, thanks :) | 13:54 |
n4nd0 | those will save me some configure and compilation times :D | 13:54 |
blackburn | no idea whether you would need or not, but you may try at least | 13:54 |
blackburn | you may also use --disable-optimizations flag | 13:55 |
blackburn | in configure | 13:55 |
blackburn | e.g. ./configure --disable-optimizations --interfaces=python_modular | 13:55 |
blackburn | and do make with -j2 or -j4 (number of threads) | 13:55 |
n4nd0 | number of threads to parallelize the compilation? | 13:56 |
blackburn | yes | 13:57 |
n4nd0 | cool | 13:57 |
blackburn | for example 'make -j4' | 13:57 |
n4nd0 | the disable optimization sounds bad though ... isn't good to optimize :P? | 13:57 |
blackburn | n4nd0: but it would compile faster then | 13:59 |
blackburn | you may see no difference between 1.5s and 1.0s walltime :) | 14:00 |
n4nd0 | it finds the ConjugateIndex now :D | 14:12 |
blackburn | sure | 14:13 |
wiking | WTF | 14:21 |
wiking | it's still counting | 14:21 |
wiking | :)))) | 14:21 |
wiking | just training i mean | 14:21 |
wiking | yess! training done | 14:25 |
n4nd0 | :) | 14:25 |
n4nd0 | blackburn, do you know about this error, it has appeared while training the ConjugateIndex | 14:25 |
n4nd0 | ldc must be >= MAX(M,1): ldc=0 M=0Parameter 14 to routine cblas_dgemm was incorrect | 14:25 |
blackburn | hah | 14:26 |
blackburn | damn | 14:26 |
blackburn | I know what is the error | 14:26 |
blackburn | could you please change your labels from -1 and 1 to 0,1? | 14:27 |
blackburn | it is a bug | 14:27 |
n4nd0 | sure, no problem ;) | 14:27 |
n4nd0 | I have seen in the examples that they use -1 and 1 for two class problems | 14:28 |
n4nd0 | is that the convention in shogun? | 14:28 |
blackburn | yes | 14:28 |
n4nd0 | ok | 14:28 |
blackburn | but conjugateindex is general multiclass | 14:28 |
blackburn | so I just forgot to implement two 'modes' | 14:28 |
blackburn | binary and multiclass | 14:28 |
blackburn | for multiclass the convention is 0,1,2,... _without_ any gaps | 14:29 |
n4nd0 | ok | 14:29 |
n4nd0 | it looks better now, it is taking some time for the training | 14:29 |
blackburn | yeah it is pretty expensive | 14:29 |
n4nd0 | cool, it trained it, let's see what do I get | 14:29 |
n4nd0 | I have to check a bit of this ConjugateIndex | 14:30 |
n4nd0 | it is completely a black box for me right now | 14:30 |
blackburn | it computes the matrix X^T (X X^T)^-1 X | 14:32 |
blackburn | for each class | 14:32 |
blackburn | X is a class feature matrix | 14:32 |
blackburn | then computes x^T (X^T (X X^T)^-1 X) x / <x,x> | 14:33 |
blackburn | and chooses nearest class for x | 14:33 |
blackburn | something like that | 14:33 |
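The scoring rule blackburn sketches can be written out as a toy, pure-Python version (shogun's actual CConjugateIndex is C++ on top of LAPACK; the class data here, two samples per class so the Gram-matrix inverse is a hand-written 2x2, is made up for illustration). Assuming the rows of X are one class's training samples, X^T (X X^T)^-1 X projects onto their span, so the score is 1.0 exactly when x lies in that span:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def class_score(x, rows):
    # score(x) = x^T X^T (X X^T)^-1 X x / <x, x>, with rows the two
    # training samples of one class; equals 1.0 iff x lies in their span
    g11, g12 = dot(rows[0], rows[0]), dot(rows[0], rows[1])
    g22 = dot(rows[1], rows[1])
    det = g11 * g22 - g12 * g12              # 2x2 Gram matrix, inverted by hand
    v = [dot(rows[0], x), dot(rows[1], x)]   # X x
    quad = (g22 * v[0] ** 2 - 2 * g12 * v[0] * v[1] + g11 * v[1] ** 2) / det
    return quad / dot(x, x)

# class 0 spans the xy-plane, class 1 spans the yz-plane
classes = {
    0: [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    1: [[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]],
}
x = [2.0, 3.0, 0.0]                          # lies in the xy-plane
scores = {c: class_score(x, rows) for c, rows in classes.items()}
pred = max(scores, key=scores.get)           # "nearest" class = highest score
```

Here class 0 scores exactly 1.0 (x is in its span) while class 1 scores 9/13, so the classifier picks class 0.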
blackburn | n4nd0: so how it was? | 14:35 |
blackburn | I bet it is bad :) | 14:35 |
n4nd0 | blackburn, mmm I am trying to use the AccuracyMeasure class to assess it | 14:37 |
n4nd0 | but I got an error because the labels in the ground truth are 0 or 1 | 14:38 |
n4nd0 | and it seems that it needs -1 and 1 | 14:38 |
blackburn | use MulticlassAccuracy | 14:38 |
blackburn | accuracy measure is for -1, 1 and it compares signs of classes | 14:39 |
n4nd0 | ok | 14:40 |
n4nd0 | I have tested the accuracy for the training set and it is 0.67 | 14:42 |
blackburn | hah | 14:42 |
blackburn | what was other results? | 14:42 |
blackburn | svm, knn? | 14:42 |
n4nd0 | this the first one I try :) | 14:43 |
n4nd0 | I am going to check a couple more now | 14:43 |
blackburn | ah okay | 14:43 |
n4nd0 | but this is just using the intensity pixels, no feature | 14:44 |
blackburn | yes, conjugate thing was designed to cope with pixels | 14:44 |
n4nd0 | aham | 14:44 |
blackburn | but acts badly anyway haha | 14:44 |
n4nd0 | haha | 14:44 |
blackburn | with svm be sure you returned back to -1,1 | 14:44 |
n4nd0 | ok | 14:46 |
n4nd0 | what is the use of the cache for the kernels? I have seen this parameter size cache that the constructors have | 14:47 |
blackburn | to store some k(x_i,x_j) values | 14:47 |
blackburn | to avoid useless computations | 14:48 |
-!- dfrx [~f-x@inet-hqmc02-o.oracle.com] has joined #shogun | 14:52 | |
blackburn | n4nd0: for example when training svms it is very likely that kernels between some support vectors would be calculated many times | 14:53 |
-!- dfrx [~f-x@inet-hqmc02-o.oracle.com] has left #shogun [] | 14:53 | |
n4nd0 | blackburn, I got it ;-) | 14:56 |
n4nd0 | blackburn, some kind of memoization like in DP | 14:56 |
blackburn | what is DP? | 14:57 |
n4nd0 | dynamic programming | 14:58 |
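The memoization idea n4nd0 names can be sketched with functools.lru_cache (shogun's actual kernel cache is a bounded C++ cache over kernel rows, not an unbounded per-pair table; the toy linear kernel and the call counter here are just for illustration):

```python
from functools import lru_cache

data = [(0.0, 0.0), (1.0, 0.0), (3.0, 4.0)]
calls = 0

@lru_cache(maxsize=None)      # key on sample indices: each k(i, j) computed once
def k(i, j):
    global calls
    calls += 1                # counts how often we actually compute
    # toy linear kernel between the indexed samples
    return sum(a * b for a, b in zip(data[i], data[j]))

k(1, 2); k(1, 2); k(1, 2)     # computed once, served from the cache twice
print(calls)                  # 1
```

This is exactly why repeated k(x_i, x_j) lookups during SVM training (the same support-vector pairs come up again and again) are cheap once cached.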
n4nd0 | blackburn, the accuracy with LibSVM is much more promising, 0.9987 (maybe a bit too high??) | 15:01 |
n4nd0 | blackburn, that is with the training set | 15:01 |
n4nd0 | I have to split it anyway to get a test set | 15:01 |
blackburn | n4nd0: which kernel did you use? | 15:03 |
n4nd0 | polynomial, degree 2 | 15:03 |
blackburn | not really high, depends on C | 15:03 |
n4nd0 | C = 1.0 | 15:03 |
blackburn | it is ok for train set | 15:03 |
blackburn | try to split it :) | 15:03 |
n4nd0 | cool | 15:03 |
n4nd0 | I have to go now | 15:04 |
n4nd0 | see you later | 15:04 |
blackburn | ok, see you | 15:04 |
-!- n4nd0 [~n4nd0@s83-179-44-135.cust.tele2.se] has quit [Quit: Leaving] | 15:04 | |
wiking | ok | 15:09 |
wiking | here's a new pull request | 15:09 |
wiking | ;) | 15:09 |
wiking | damn how could i only select 1 commit for the pull request | 15:11 |
wiking | there's a bug in HIK | 15:11 |
blackburn | bug? | 15:12 |
wiking | yep | 15:12 |
wiking | nothing too serious | 15:12 |
wiking | but | 15:12 |
blackburn | tell me :) | 15:12 |
wiking | https://github.com/vigsterkr/shogun/commit/10d44a2bd4d497ec9ad4dd5b98be72afe9f52022 | 15:13 |
wiking | check that | 15:13 |
blackburn | haha you scared me | 15:13 |
blackburn | wiking: just do it in one pull request | 15:14 |
wiking | can't | 15:14 |
wiking | i mean yeah | 15:14 |
blackburn | why not? | 15:14 |
wiking | what i could do is that take the optimization + this bug | 15:14 |
wiking | in one pull request | 15:14 |
blackburn | yes | 15:14 |
blackburn | it is better to multiply entities ;) | 15:15 |
blackburn | occam restricted that | 15:15 |
wiking | mmm | 15:20 |
wiking | by the way | 15:20 |
wiking | do you know BoV? | 15:20 |
wiking | i guess so | 15:20 |
wiking | do you think that it would make a difference to use some kind of a different distance measure instead of a (squared) euclidean distance, like JSdistance? :) | 15:21 |
blackburn | I am afraid I don't | 15:21 |
blackburn | what is BoV? | 15:21 |
wiking | bag of visual words | 15:21 |
wiking | or bow | 15:22 |
wiking | or bovw | 15:22 |
blackburn | ah | 15:22 |
wiking | as you feel | 15:22 |
wiking | so when i'm doing the clustering | 15:22 |
wiking | e.g. k-means | 15:22 |
blackburn | I didn't really get into it :( | 15:22 |
wiking | the distance measure is dot product atm | 15:22 |
wiking | but i think that it's bs | 15:22 |
wiking | as those sift outputs | 15:22 |
wiking | are as well histograms | 15:22 |
wiking | not just some random values | 15:22 |
wiking | anyhow | 15:23 |
wiking | i'm getting new results | 15:23 |
wiking | the JR currently sucks :P | 15:23 |
wiking | but we'll see with different arguments | 15:23 |
blackburn | hmm no idea about distance | 15:25 |
blackburn | I should dig into BoVW a little | 15:25 |
wiking | heheh well if u have any questions just shoot | 15:31 |
wiking | been digging in that shit for a while now :P | 15:31 |
blackburn | thanks! | 15:33 |
blackburn | wiking: what is a typical measure between sift descriptors? | 15:39 |
blackburn | distance or similarity or so | 15:39 |
blackburn | and how does it match points? :) | 15:39 |
blackburn | I believe answers are in papers, but anyway :) | 15:41 |
wiking | ah there are several ways for matching sift descriptors | 15:50 |
wiking | although i'm not doing that at all :)) | 15:50 |
blackburn | then how do you use any classifiers? | 15:53 |
wiking | well i'm using sift just to create visual words | 16:01 |
wiking | and then from visual words i create a visual words frequency vector | 16:01 |
wiking | and that's fed to the classifier | 16:02 |
blackburn | what is visual word in that meaning? | 16:02 |
wiking | well you get a lot of sift descriptors from a lot of images | 16:03 |
wiking | and then do an 'averaging' i.e. use a cluster algo | 16:03 |
wiking | to create visual words | 16:03 |
wiking | i.e. a codebook | 16:03 |
wiking | so that the clustering algorithm basically creates N clusters | 16:04 |
wiking | and u assign each sift descriptor to the nearest cluster point | 16:04 |
blackburn | ok I found some presentation | 16:06 |
blackburn | example of word is eye on people photo | 16:07 |
blackburn | right? | 16:07 |
blackburn | oh it seems I'm getting into it | 16:07 |
blackburn | for new unclassified image | 16:07 |
blackburn | you compute sift descriptors | 16:07 |
blackburn | and assign every localized descriptor to nearest cluster | 16:08 |
blackburn | and then you have e.g. vector [#eyes #cars #bikes] | 16:08 |
blackburn | wiking: something like that? | 16:08 |
wiking | mmmnot really | 16:09 |
wiking | :) | 16:09 |
wiking | damn | 16:09 |
wiking | JR is still not good | 16:09 |
blackburn | ok what you are doing once you get descriptors? | 16:10 |
blackburn | assuming on train set you did clustering of various points of interest | 16:11 |
wiking | so when u've finished the clustering | 16:11 |
wiking | u basically assign each sift in an image to a cluster point (the nearest one) | 16:11 |
wiking | after this you count how many times a given cluster point is present within an image | 16:11 |
blackburn | each sift in an unclassified image? | 16:11 |
wiking | i.e. you'll get a term frequency | 16:11 |
blackburn | yeah I thought I said the same :) | 16:12 |
wiking | i.e. cluster centre frequency | 16:12 |
wiking | not really :P | 16:12 |
wiking | but yeah after you have this TF vector | 16:12 |
blackburn | what is the difference? | 16:12 |
wiking | you basically just take the labeling for that tf vector | 16:12 |
wiking | and learn by that the classifier | 16:12 |
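The assignment-and-count step wiking describes can be sketched as follows (the codebook is assumed to have come from an earlier k-means run, and the 2-D "descriptors" stand in for real 128-D SIFT vectors):

```python
def nearest(desc, codebook):
    # index of the closest visual word (squared euclidean, as in k-means)
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(desc, codebook[i])))

def term_frequency(descriptors, codebook):
    # count how often each cluster centre is the nearest one in this image;
    # this TF vector (plus its label) is what gets fed to the classifier
    tf = [0] * len(codebook)
    for d in descriptors:
        tf[nearest(d, codebook)] += 1
    return tf

codebook = [[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]]    # dictionary of 3 words
image_descriptors = [[0.1, 0.2], [9.5, 0.5], [9.9, 1.0], [0.0, 9.0]]
print(term_frequency(image_descriptors, codebook))   # [1, 2, 1]
```

Swapping the squared euclidean distance in `nearest` for something histogram-aware like a JS distance is exactly the variation being discussed here.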
blackburn | so feature vectors contains rates of various words? | 16:13 |
blackburn | I feel stupid :D | 16:13 |
blackburn | is a clustering a mandatory step? | 16:14 |
blackburn | I mean it could be 'misclustered' :) | 16:15 |
wiking | well | 16:26 |
wiking | the problem is that otherwise you have just too many distinct sift feature values | 16:26 |
wiking | so this is why you create a 'dictionary' | 16:26 |
wiking | with clustering | 16:26 |
blackburn | okay it looks clear now but I can't see my mistake | 16:27 |
blackburn | where did I go wrong? | 16:27 |
wiking | blackburn: and then you have e.g. vector [#eyes #cars #bikes] | 16:28 |
wiking | the clustered descriptors are not telling you anything about the label | 16:28 |
blackburn | [#rate for cluster 1, ... 2, ...] | 16:29 |
blackburn | right? | 16:29 |
wiking | yep | 16:29 |
wiking | so you would have something like | 16:29 |
blackburn | [0, 3, 5]? | 16:29 |
blackburn | :) | 16:29 |
wiking | car: [0,3,5] | 16:29 |
wiking | if you have a dictionary the size of 3, and this is a dense vector... | 16:30 |
blackburn | is there any weighting here? | 16:30 |
blackburn | could it be 1.243? | 16:30 |
wiking | well it depends more on your clustering method | 16:30 |
wiking | because you could use | 16:30 |
wiking | fuzzy clustering | 16:30 |
wiking | or even do some GMM shit | 16:30 |
wiking | so it really depends on your approach | 16:30 |
blackburn | hah I see | 16:30 |
blackburn | sounds like a pretty unstable, but working method | 16:31 |
wiking | well it works very well | 16:31 |
wiking | currently this gives you the best results | 16:31 |
blackburn | but it doesn't fit my task anyway :) | 16:31 |
wiking | like on a caltech101 benchmark dataset | 16:31 |
blackburn | wiking: ok thanks for your explanation | 16:59 |
wiking | nw | 16:59 |
blackburn | I'll merge your stuff once I get myself to home:) | 16:59 |
wiking | cool thnx | 16:59 |
wiking | maybe till then i get you a new/better kernel | 17:00 |
wiking | but then again it seems the one i thought would be good is not that good | 17:00 |
blackburn | I can't wait to see results with JS | 17:02 |
blackburn | :) | 17:02 |
blackburn | heading to gym right now | 17:09 |
blackburn | see ya | 17:09 |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has quit [Quit: Page closed] | 17:09 | |
-!- blackburn [~qdrgsm@188.168.3.68] has joined #shogun | 20:17 | |
blackburn | sonney2k: helloo! | 20:19 |
wiking | he's not so sure about my improvement for JS kernel | 20:32 |
wiking | :P | 20:32 |
blackburn | yeah I find it strange | 20:34 |
wiking | :>> | 20:49 |
wiking | anyhow | 20:50 |
wiking | where should i send things about gsco? | 20:50 |
wiking | gsoc? | 20:50 |
blackburn | send what? | 20:59 |
wiking | well about this year | 21:02 |
wiking | if i want to do something | 21:02 |
wiking | like the l-ssvm | 21:02 |
blackburn | you would have to wait until shogun is accepted (or not) :) | 21:05 |
wiking | yeah i know | 21:07 |
wiking | just that i have so much other stupid things to do | 21:08 |
wiking | in case it's accepted | 21:08 |
wiking | i want to have the whole thingy ready by then :P | 21:08 |
wiking | i mean the proposal etc | 21:08 |
wiking | nyaah man | 21:08 |
wiking | the JR is shit | 21:08 |
wiking | :P | 21:08 |
wiking | but now i'm trying a new one | 21:08 |
wiking | JT :DD | 21:08 |
blackburn | man you're totally hung up on J* things :) | 21:09 |
wiking | yeah | 21:10 |
wiking | because it's a great thingy | 21:10 |
wiking | i mean Jensen divergence | 21:10 |
wiking | it's really good at finding out the similarity between two probability distributions | 21:11 |
wiking | so yeah of course you want that in CV :P | 21:11 |
wiking | maaan this is fucking hilarious: http://third-bit.com/blog/archives/4431.html | 21:12 |
blackburn | ha??? | 21:13 |
blackburn | hah | 21:13 |
-!- blackburn [~qdrgsm@188.168.3.68] has quit [Ping timeout: 276 seconds] | 21:40 | |
-!- naywhayare [~ryan@spoon.lugatgt.org] has quit [Remote host closed the connection] | 21:56 | |
-!- blackburn [~qdrgsm@188.168.4.119] has joined #shogun | 22:33 | |
-!- blackburn [~qdrgsm@188.168.4.119] has left #shogun [] | 22:50 | |
-!- blackburn [~qdrgsm@188.168.4.209] has joined #shogun | 22:58 | |
wiking | blackburn: yo | 23:40 |
blackburn | yes? | 23:40 |
wiking | ehehhe no idea | 23:43 |
--- Log closed Tue Feb 14 00:00:19 2012 |