--- Log opened Fri Apr 01 00:00:36 2011
@sonney2k | serialhex, I am listening... | 00:02 |
* sonney2k is just returning from the fsfe 10 years celebration | 00:02 | |
blackburn | sonney2k: hi, I said to you last week about a little modification of kNN (I'm doing it to get familiar with 'internals' of shogun) | 00:05 |
blackburn | my idea is introducing a 'type' of classifier: if k=1, we don't need to sort, and classification is very fast | 00:06 |
blackburn | same thing with distance weighting of examples | 00:06 |
blackburn | what's your opinion on that? | 00:06 |
@sonney2k | sounds valid | 00:06 |
blackburn | should it be a different classifier, sth like 'generalized kNN'? | 00:07 |
blackburn | *I know kNN is not so useful, but it seems like a good way to learn about shogun development* :) | 00:08 |
serialhex | sonney2k: so i was planning on doing the ruby bindings for GSoC, and i had a different question then... i have a better question now :P | 00:15 |
-!- blackburn1 [~qdrgsm@188.168.4.175] has joined #shogun | 00:16 | |
-!- blackburn [~qdrgsm@188.168.4.22] has quit [Ping timeout: 240 seconds] | 00:16 | |
@sonney2k | blackburn1, I would introduce different train functions, one for k=1 and one for general k. But I would always include an array with example weighting | 00:17 |
blackburn1 | but there is no difference when training | 00:18 |
blackburn1 | sonney2k: btw, sorry, got disconnected; was that your only answer? | 00:18 |
@sonney2k | blackburn1, yes | 00:18 |
blackburn1 | okay | 00:18 |
serialhex | sonney2k: i've been talking to a few people on #ruby-lang and they don't recommend SWIG bindings for various reasons... is there any specific reason i *should* or *would have to* use swig for that project? | 00:18 |
@sonney2k | blackburn1, sorry applying | 00:18 |
@sonney2k | serialhex, well then don't use ruby with shogun :) | 00:20 |
@sonney2k | serialhex, but python :) swig bindings for python are pretty good... | 00:20 |
serialhex | sonney2k: no, what i was saying was, i *want* to do the project, i was just wondering if there's a specific reason you use swig? | 00:21 |
serialhex | sonney2k: i mean from what i can tell there are a few nice tools within ruby that makes it easy to do all that... i'll have to explore a bit more but if it's gonna be frowned upon in my application then i'll have to get myself set up differently | 00:23 |
blackburn1 | sonney2k: so, should I implement something like 'classify_with_NN(...)'? | 00:23 |
blackburn1 | and regardless of this->k, it should classify for k=1, with no sorting, am I right? | 00:24 |
@sonney2k | serialhex, the reason is that we can easily (if swig is bugfree for $LANG) support many different languages | 00:24 |
@sonney2k | blackburn1, exactly | 00:24 |
blackburn1 | sonney2k: and another one, 'classify_distance_weighted()', but there is a problem, how to describe distance weight function? | 00:25 |
serialhex | sonney2k: hmm... ok | 00:25 |
@sonney2k | classify_one_nn() / classify_k_nn() would make sense / but always have an array with example importance weights | 00:25 |
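The split sonney2k suggests could look roughly like this in Python pseudocode (shogun's real implementation is C++ inside CKNN; the function names and signatures here are illustrative only):

```python
def classify_one_nn(dists, train_lab):
    # k=1 special case: one linear pass for the minimum, no sorting needed
    best = min(range(len(dists)), key=dists.__getitem__)
    return train_lab[best]

def classify_k_nn(dists, train_lab, k, num_classes, weight=None):
    # general case: take the k nearest neighbours and do a (weighted) vote
    nn = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    votes = [0.0] * num_classes
    for rank, j in enumerate(nn):
        w = 1.0 if weight is None else weight(dists[j], rank)
        votes[train_lab[j]] += w  # mirrors the chat's classes[train_lab[j]]++
    return max(range(num_classes), key=votes.__getitem__)
```

The point of the split is that the k=1 path is a single linear scan, while the general path pays for sorting (or at least a partial selection of) the distances.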
@sonney2k | serialhex, so all we have to do is write the C++ code - no more work needed to write interface code | 00:26 |
serialhex | sonney2k: that makes sense... | 00:26 |
blackburn1 | eh, I don't quite understand: what kind of importance weights? as I see it, there is no difference between examples: classes[train_lab[j]]++; | 00:27 |
@sonney2k | blackburn1, I thought you wanted to introduce that? | 00:27 |
-!- aifargonos [~aifargono@46.18.27.35] has quit [Ping timeout: 248 seconds] | 00:27 | |
blackburn1 | when using weights by distance we should e.g. classes[train_lab[j]] += inverse_distance_squared(dists[j]); | 00:27 |
@sonney2k | ahh I see | 00:28 |
@sonney2k | ^^ blackburn1 | 00:28 |
blackburn1 | not importance weights in the sense that example one is more important than example two, but that the example nearest to the object being classified is more important than the farthest, etc | 00:29 |
-!- blackburn1 is now known as blackburn | 00:29 | |
blackburn | it could be even a rank function, like classes[train_lab[j]] += pow(0.5,j); | 00:31 |
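The two weightings blackburn describes, written out as sketches (inverse_distance_squared is his hypothetical helper name from earlier in the discussion; the small eps is an addition here to avoid division by zero on exact-duplicate points):

```python
def inverse_distance_squared(d, eps=1e-12):
    # distance weighting: nearer neighbours vote with mass 1/d^2
    return 1.0 / (d * d + eps)

def rank_weight(j, base=0.5):
    # rank weighting: the j-th nearest neighbour votes with base**j
    return base ** j
```

Either would be accumulated as `classes[train_lab[j]] += weight` instead of the plain `classes[train_lab[j]]++`.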
blackburn | how does it seem to you? | 00:31 |
blackburn | btw, sonney2k, what did you mean by '^^'? :) | 00:33 |
@sonney2k | blackburn, yes when I said I see I meant I understand. Before I was thinking that you might have some weights over examples - like some kind of certainty score that an example can be trusted more. | 00:35 |
@sonney2k | sorry but I have to leave for now. | 00:35 |
@sonney2k | cu tomorrow | 00:35 |
blackburn | sorry for a little misunderstading | 00:35 |
blackburn | see you, will implement that idea | 00:36 |
@sonney2k | bye | 00:36 |
blackburn | misunderstanding* | 00:36 |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has joined #shogun | 00:44 | |
-!- mode/#shogun [+o bettyboo] by ChanServ | 00:44 | |
-!- blackburn [~qdrgsm@188.168.4.175] has quit [Quit: Leaving.] | 00:45 | |
@knrrrd | hmmm, it's getting late | 00:46 |
@knrrrd | bye guys, i am also leaving. | 00:48 |
-!- shelhamer [~shelhamer@AMontsouris-152-1-50-9.w83-202.abo.wanadoo.fr] has joined #shogun | 01:00 | |
-!- sploving [~root@124.16.139.196] has joined #shogun | 01:48 | |
-!- shelhamer [~shelhamer@AMontsouris-152-1-50-9.w83-202.abo.wanadoo.fr] has quit [Quit: Get MacIrssi - http://www.sysctl.co.uk/projects/macirssi/] | 02:25 | |
-!- sploving [~root@124.16.139.196] has quit [Ping timeout: 240 seconds] | 03:23 | |
-!- sploving [~root@124.16.139.196] has joined #shogun | 03:27 | |
-!- Ziyuan [~Ziyuan@116.23.213.66] has quit [] | 03:34 | |
-!- sploving [~root@124.16.139.196] has quit [Remote host closed the connection] | 03:39 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 03:48 | |
-!- sonney2k [~shogun@87.118.92.43] has quit [Ping timeout: 240 seconds] | 04:10 | |
-!- sonney2k [~shogun@87.118.92.43] has joined #shogun | 04:11 | |
-!- mode/#shogun [+o sonney2k] by ChanServ | 04:11 | |
@bettyboo | sonney2k | 04:11 |
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 246 seconds] | 04:27 | |
-!- Ryaether [~Ryaether@50-80-170-245.client.mchsi.com] has joined #shogun | 04:31 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 04:41 | |
-!- sonney2k [~shogun@87.118.92.43] has quit [Ping timeout: 240 seconds] | 04:48 | |
-!- sonney2k [~shogun@87.118.92.43] has joined #shogun | 04:49 | |
-!- mode/#shogun [+o sonney2k] by ChanServ | 04:49 | |
-!- dvevre [b49531e3@gateway/web/freenode/ip.180.149.49.227] has quit [Ping timeout: 252 seconds] | 05:19 | |
-!- shayan [~shayan@27.107.252.137] has joined #shogun | 05:23 | |
-!- shayan [~shayan@27.107.252.137] has quit [Quit: Leaving] | 05:29 | |
-!- shayan_ [~shayan@27.107.252.137] has joined #shogun | 05:29 | |
-!- shayan_ is now known as shayan | 05:30 | |
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 250 seconds] | 05:45 | |
-!- shayan [~shayan@27.107.252.137] has quit [Quit: Leaving] | 06:44 | |
-!- sploving [~root@124.16.139.196] has joined #shogun | 06:50 | |
-!- siddharth__ [~siddharth@117.211.88.150] has joined #shogun | 07:19 | |
-!- siddharth_ [~siddharth@117.211.88.150] has quit [Ping timeout: 276 seconds] | 07:23 | |
-!- siddharth__ [~siddharth@117.211.88.150] has quit [Ping timeout: 276 seconds] | 07:31 | |
-!- siddharth__ [~siddharth@117.211.88.150] has joined #shogun | 07:35 | |
-!- sploving [~root@124.16.139.196] has quit [Quit: Leaving.] | 07:35 | |
-!- siddharth_ [~siddharth@117.211.88.150] has joined #shogun | 07:37 | |
-!- sploving [~root@124.16.139.196] has joined #shogun | 07:40 | |
-!- siddharth__ [~siddharth@117.211.88.150] has quit [Ping timeout: 260 seconds] | 07:41 | |
@knrrrd | back | 08:00 |
-!- aifargonos [~aifargono@46.18.27.35] has joined #shogun | 08:05 | |
-!- seviyor [c1e20418@gateway/web/freenode/ip.193.226.4.24] has quit [Ping timeout: 252 seconds] | 08:22 | |
-!- aifargonos [~aifargono@46.18.27.35] has quit [Ping timeout: 250 seconds] | 08:27 | |
-!- aifargonos [~aifargono@193.206.186.107] has joined #shogun | 08:44 | |
@knrrrd | hmmm | 08:44 |
-!- jabbok [56220ef5@gateway/web/freenode/ip.86.34.14.245] has joined #shogun | 08:57 | |
-!- sploving [~root@124.16.139.196] has quit [Ping timeout: 240 seconds] | 08:58 | |
-!- sploving [~root@124.16.139.196] has joined #shogun | 09:07 | |
-!- siddharth__ [~siddharth@117.211.88.150] has joined #shogun | 09:30 | |
-!- siddharth_ [~siddharth@117.211.88.150] has quit [Ping timeout: 248 seconds] | 09:33 | |
-!- siddharth__ [~siddharth@117.211.88.150] has quit [Quit: Leaving] | 09:46 | |
-!- sploving [~root@124.16.139.196] has quit [Ping timeout: 240 seconds] | 09:46 | |
-!- sploving [~root@124.16.139.196] has joined #shogun | 09:48 | |
-!- siddharth_k [~siddharth@117.211.88.150] has joined #shogun | 09:53 | |
-!- AdaHopper [~Adium@vera2g59-166.wi-fi.upv.es] has joined #shogun | 09:57 | |
AdaHopper | good morning | 09:57 |
-!- Ryaether [~Ryaether@50-80-170-245.client.mchsi.com] has left #shogun [] | 10:03 | |
* sonney2k is also back... | 10:09 | |
* sonney2k just replaced his defective router | 10:09 | |
@knrrrd | hi sonney2k | 10:17 |
* sonney2k it feels soo good to be online :) | 10:17 | |
-!- Ziyuan [~Ziyuan@116.23.213.66] has joined #shogun | 10:18 | |
* Ziyuan hello | 10:18 | |
@knrrrd | when did your router fail? today? | 10:19 |
-!- aifargonos [~aifargono@193.206.186.107] has quit [Ping timeout: 250 seconds] | 10:19 | |
Ziyuan | So we already have CSparseKernel and CSparseFeatures | 10:21 |
@sonney2k | knrrrd, 3 days ago... hansenet was quick to send a new one (took 1.5 days) ... | 10:25 |
@knrrrd | not bad. | 10:25 |
@sonney2k | Ziyuan, actually sparse kernel is not really used - dot features are a more general framework that replaced them | 10:26 |
@sonney2k | knrrrd, in particular considering that it took 14hrs until I finally connected it... | 10:26 |
-!- aifargonos [~aifargono@193.206.186.107] has joined #shogun | 10:27 | |
@sonney2k | even b/g/n wlan | 10:27 |
@sonney2k | lets hope it proves stable... | 10:27 |
@knrrrd | is it a common brand, such as fritzbox? | 10:28 |
@sonney2k | no something called alice wlan 1421 or so | 10:32 |
@sonney2k | the interface is really for dummies... difficult to use for me (could hardly find the port forwarding...) | 10:32 |
@knrrrd | hehe | 10:32 |
Ziyuan | sonney2k, but aren't we supposed to implement the "Sparse Kernel Feature Analysis" this time? | 10:33 |
@sonney2k | what is interesting though is that it has a pppoe-passthrough. so my linksys router behind can do the real job ;-) | 10:34 |
@sonney2k | Ziyuan, the name is similar yes but it is something different - knrrrd will explain I hope :) | 10:34 |
Ziyuan | OK~ | 10:37 |
@knrrrd | Ziyuan: sparse kernel feature analysis is closely related to kpca | 10:37 |
Ziyuan | Yes, I've read the two papers | 10:38 |
@knrrrd | in kpca, you have a set of vectors and you learn a new basis in the kernel-induced feature space. | 10:38 |
@knrrrd | each vector of this basis is a linear combination of all training examples | 10:38 |
@knrrrd | so if you have n training examples and look at the first 3 basis vectors, you are dealing with 3 * n examples | 10:39 |
@knrrrd | this is different with kfa, here the basis vectors are sparse. | 10:39 |
@knrrrd | that is, a basis vector is only associated with ONE example of the training data | 10:39 |
@knrrrd | if we are looking at the 3 first basis vectors, we are only dealing with 3 examples and thus can be very fast when processing larger data sets | 10:39 |
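A toy sketch of the cost difference knrrrd describes (illustrative only, ignoring normalization of the basis vectors; not shogun code): projecting a new example onto d KPCA components needs kernel evaluations against all n training examples, while a sparse KFA basis needs only one "pivot" example per component.

```python
def project_kpca(x, train, alphas, kernel):
    # dense basis: every component is a linear combination of ALL n examples
    kx = [kernel(x, t) for t in train]  # n kernel evaluations
    return [sum(a * k for a, k in zip(alpha, kx)) for alpha in alphas]

def project_kfa(x, pivots, kernel):
    # sparse basis: each component is tied to a single training example
    return [kernel(x, p) for p in pivots]  # only len(pivots) evaluations
```

So for the "first 3 basis vectors" example from the chat, KPCA costs 3 sums over n kernel values while KFA costs just 3 kernel values.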
Ziyuan | Yes | 10:40 |
Ziyuan | I thought that we are to implement it via those two classes. | 10:40 |
@knrrrd | so the implementation of kfa does not require any special kernel or feature representation | 10:40 |
@knrrrd | i think it would be best to put everything in a new class. | 10:40 |
@knrrrd | in principle it should work with any kernel object and feature vector (by virtue of kernels) | 10:41 |
@knrrrd | i guess sonney2k knows more on how feature vectors and kernel objects match in Shogun | 10:41 |
Ziyuan | Thanks for your detailed explanation :) | 10:41 |
* knrrrd brb | 10:42 | |
Ziyuan | Before asking questions, I'll read the source code of that part at first. | 10:47 |
@knrrrd | sonney2k: are you still alive? | 10:48 |
@sonney2k | I am | 10:48 |
@sonney2k | just debugging something on mldata.org | 10:48 |
@knrrrd | I noticed that all kernel-based methods derive from CKernelMachine | 10:49 |
@sonney2k | yes | 10:49 |
@knrrrd | would it make sense to add KFA there? i am not sure. CKernelMachine rather looks like a generic SVM wrapper | 10:49 |
@sonney2k | knrrrd, I am not sure what KFA does | 10:50 |
@sonney2k | I mean what is the input and what is the output? | 10:50 |
@knrrrd | sonney2k: it's a sparse version of kpca | 10:50 |
@knrrrd | output: a new basis (an array of vectors) ;) | 10:50 |
@sonney2k | knrrrd, then I would implement it as a preprocessor, have a look at KernelPCACut.h | 10:51 |
@sonney2k | that is what does kpca in shogun | 10:51 |
@knrrrd | looks very good | 10:52 |
@knrrrd | KernelPCACut.h is still under construction ;) | 10:53 |
@knrrrd | most of the code is commented out | 10:54 |
@sonney2k | knrrrd, yeah true but the kpca itself is done already just not the projection / applying things to features | 10:55 |
@knrrrd | yep i see. | 10:55 |
@knrrrd | so CSimplePreProc is a good starting point for KFA | 10:56 |
@sonney2k | knrrrd, I think so - there is one catch though. You might want to support features other than just real-valued vectors later | 10:57 |
@knrrrd | yes. I am wondering what the type 'ST' means? | 10:57 |
@sonney2k | simple type, e.g. int32_t or float64_t | 10:58 |
@knrrrd | ah ok. then we better derive from CPreProc | 10:59 |
@knrrrd | maybe something like CKernelPreProc | 10:59 |
@sonney2k | knrrrd, so it will work with any kind of inputs - strings etc? | 11:00 |
@knrrrd | yes | 11:00 |
@knrrrd | input set of strings, output n-dimensional representation | 11:00 |
@knrrrd | just like kpca | 11:00 |
@knrrrd | i think it will be straightforward. however, we can only support CFeatures* as input and need to get rid of apply_to_feature_vector | 11:02 |
@sonney2k | in this case preprocessors are not ideal - they have the (implicit) assumption that the feature type does not change. so I would recommend to introduce KFAFeatures that get as an argument the kernel | 11:02 |
@sonney2k | ... and then produce the n-dimensional feature vectors later | 11:02 |
@knrrrd | ok. would have been my next question, how to change the feature type during conversion | 11:03 |
@sonney2k | so these could be derived from CSimpleFeatures<float64_t> and all you have to do is store the feature_matrix | 11:03 |
@sonney2k | so if you manage doing kernel -> feature matrix you are done | 11:03 |
@knrrrd | there is still another catch. you can do kfa/kpca on one set of objects and then apply it on another | 11:03 |
@knrrrd | i think features can not handle this case. | 11:03 |
@knrrrd | so eventually, it is a CKernelMachine ;) | 11:04 |
@sonney2k | knrrrd, could be - I guess you have to override the 'classify' functions inside kernel machine then - (classify is misleading now - should be 'apply' then) | 11:05 |
@knrrrd | yes | 11:05 |
@knrrrd | similar to the clustering algorithms | 11:07 |
@knrrrd | okay. the more i think about it, KPCA and KFA can be seen as a regression with multiple outputs. so it's more a kernel machine | 11:09 |
Ziyuan | And does KFA need to be regularized? | 11:12 |
@knrrrd | no | 11:13 |
-!- skydiver [c255a037@gateway/web/freenode/ip.194.85.160.55] has joined #shogun | 11:13 | |
@knrrrd | it's as sensitive to outliers as kpca | 11:13 |
Ziyuan | Hmm... | 11:13 |
@knrrrd | dim reduction cannot really overfit. they are unsupervised | 11:16 |
Ziyuan | Yep, I've got it. | 11:19 |
@knrrrd | anyone else here interested in kfa? | 11:19 |
@knrrrd | i am just curious | 11:19 |
Ziyuan | me, me | 11:19 |
@knrrrd | ;) | 11:19 |
skydiver | hi Konrad | 11:20 |
skydiver | i'm interested | 11:20 |
@knrrrd | okay. any questions? | 11:21 |
skydiver | do you know can kfa be applied somehow to dna data? | 11:24 |
@knrrrd | skydiver: it can. | 11:25 |
@knrrrd | if you take a string kernel, you can compute a new basis for string data via kfa | 11:25 |
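As a concrete toy example of such a string kernel for DNA, here is a naive k-mer spectrum kernel (shogun provides efficient string-kernel implementations; this standalone version only illustrates the idea and is not shogun's API):

```python
from collections import Counter

def spectrum_kernel(s, t, k=3):
    # k(s, t) = <phi(s), phi(t)>, where phi maps a string to its vector of
    # k-mer counts: a huge feature space that is never built explicitly
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)
```

With a kernel like this, KFA can pick a sparse basis of DNA sequences and map any new sequence to a low-dimensional vector, exactly as knrrrd outlines.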
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has left #shogun [] | 11:26 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has joined #shogun | 11:27 | |
-!- mode/#shogun [+o bettyboo] by ChanServ | 11:27 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has left #shogun [] | 11:30 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has joined #shogun | 11:30 | |
-!- mode/#shogun [+o bettyboo] by ChanServ | 11:30 | |
@knrrrd | something is going on, here. ;) | 11:30 |
@knrrrd | so i am thinking about applying kfa to string data | 11:31 |
skydiver | how cutoff parameter (delta) can be selected for ACKFA? | 11:31 |
@knrrrd | it would allow us to have a set of strings as input and get a set of vectors as output | 11:31 |
Ziyuan | So where would the new classes be placed according to the current class hierarchy? | 11:31 |
@knrrrd | skydiver: i don't know. i think it depends on the data and we have to test | 11:31 |
@knrrrd | Ziyuan: i have been talking with sonney2k on this issue. The best would be to take CKernelMachine as a super-class, although KFA is not a classifier | 11:32 |
Ziyuan | Ah, I saw it. | 11:33 |
@knrrrd | i think it will not be too difficult. but also not extremely easy ;) | 11:36 |
skydiver | @knrrrd if we have for example a dataset with 200000 features it would be great to reduce the feature space size - i think AKFA can be applied | 11:36 |
@knrrrd | skydiver: exactly. | 11:36 |
@knrrrd | skydiver: especially if the data is not vectorial but discrete, such as trees and strings | 11:37 |
@knrrrd | skydiver: then we can compute vectors from the data by using the basis vectors of kfa | 11:37 |
skydiver | @knrrrd as i see it's O(l * n^2) where n is the size of the input patterns; does the size of the feature space affect the complexity? | 11:38 |
@knrrrd | skydiver: yes. the size of the feature space is implicitly captured in the kernel computation | 11:39 |
-!- AdaHopper [~Adium@vera2g59-166.wi-fi.upv.es] has left #shogun [] | 11:39 | |
@knrrrd | but you don't need to care about this. Shogun has several kernel functions available which are very efficient | 11:39 |
skydiver | @knrrrd computation of a kernel function of two vectors from R^d costs O(d) | 11:42 |
@knrrrd | skydiver: depends | 11:42 |
skydiver | @knrrrd so i'd like to know: can it affect the O(l * n^2) of AKFA by more than a constant? | 11:42 |
@knrrrd | skydiver: i don't think so. | 11:43 |
skydiver | @knrrrd can you give example of trees and strings input data with lot of features? i haven't used string kernel and want to get some point | 11:45 |
@knrrrd | skydiver: basically you don't define the features explicitly but implicitly, by using a kernel | 11:46 |
@knrrrd | skydiver: just have a look at papers on string kernels and tree kernels | 11:46 |
skydiver | ok | 11:46 |
skydiver | now i see | 11:46 |
@knrrrd | skydiver: often these kernels operate in vector spaces with millions or infinitely many dimensions | 11:47 |
@knrrrd | skydiver: but they do this efficiently and not in O(d) of course ;) | 11:47 |
@bettyboo | :> | 11:47 |
-!- ziyuang [~Ziyuan@218.104.71.166] has joined #shogun | 11:48 | |
@knrrrd | kpca has already been used for such applications | 11:49 |
@knrrrd | however, the basis vectors of kpca are dense. | 11:50 |
@knrrrd | if you have a training set with millions of instances, each basis vector is a linear combination of these instances | 11:50 |
@knrrrd | -> bad and slow | 11:50 |
@knrrrd | for kfa, we only have one training instance per basis vector | 11:50 |
-!- Ziyuan [~Ziyuan@116.23.213.66] has quit [Ping timeout: 246 seconds] | 11:51 | |
@knrrrd | that is, we can compute an n-dimensional vector from a set of objects by comparing each object only to n instances learned by kfa | 11:51 |
@knrrrd | anything else related to kfa? | 11:53 |
@knrrrd | i need to leave in about 10 minutes | 11:53 |
@knrrrd | btw. gsoc of shogun has been twittered by PASCAL. Cool ;) | 11:54 |
@bettyboo | funny! | 11:54 |
-!- ziyuang [~Ziyuan@218.104.71.166] has quit [] | 11:56 | |
-!- Ziyuan [~Ziyuan@218.104.71.166] has joined #shogun | 11:57 | |
skydiver | @knrrrd are there any kfa methods other than kpca, skfa and akfa? | 11:57 |
@knrrrd | http://twitter.com/#!/PASCALNetwork | 11:57 |
@knrrrd | skydiver: i guess there are many variants of kpca and sparse kpca with different scopes and applications | 11:58 |
@knrrrd | skydiver: most however try to improve kpca in terms of robustness and not run-time performance | 11:58 |
@knrrrd | skydiver: you are andrew, right? | 11:59 |
skydiver | yes | 12:00 |
@knrrrd | Ziyuan: you are ziyuan. that wasn't too hard. | 12:00 |
skydiver | @knrrrd i'm very interested to use kfa for my current work with dna data - thanks for some points | 12:01 |
Ziyuan | Yes, I am Ziyuan Lin~ | 12:01 |
Ziyuan | There is "Kernel Correlation Feature Analysis" | 12:01 |
@knrrrd | ah ok. i guess there is a lot of "Kernel * * Analysis" | 12:01 |
@knrrrd | kica, kcca and so on | 12:02 |
@knrrrd | okay. nice talking to you. | 12:04 |
@knrrrd | bye | 12:04 |
skydiver | bye | 12:05 |
-!- skydiver [c255a037@gateway/web/freenode/ip.194.85.160.55] has quit [Quit: Page closed] | 12:05 | |
Ziyuan | byu | 12:06 |
Ziyuan | ...bye | 12:06 |
Ziyuan | Why is "ZeroMeanCenterKernelNormalizer" implemented without using "center_matrix" from Mathematics.h? | 12:10 |
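Presumably both routines compute the standard feature-space centering K' = HKH with H = I - (1/n)11^T, i.e. subtracting row means, column means, and adding back the grand mean, which is why sonney2k later suggests checking whether they are the same. A minimal pure-Python version for such a check (not shogun's code):

```python
def center_kernel_matrix(K):
    # K'[i][j] = K[i][j] - rowmean_i - colmean_j + grandmean
    n = len(K)
    row = [sum(r) / n for r in K]
    col = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    g = sum(row) / n
    return [[K[i][j] - row[i] - col[j] + g for j in range(n)]
            for i in range(n)]
```

Rows and columns of the centered matrix sum to zero, which gives a quick sanity check against either shogun implementation.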
-!- aifargonos [~aifargono@193.206.186.107] has quit [Ping timeout: 276 seconds] | 12:32 | |
CIA-30 | shogun: Christian Widmer boost_serialization * rae4a959 / (4 files in 3 dirs): - http://bit.ly/fZFVEk | 12:51 |
CIA-30 | shogun: Jonas Behr galaxy * ra73a80f / (6 files in 3 dirs): changes of structure interface for usage in galaxy framework - http://bit.ly/g3qfHL | 12:51 |
CIA-30 | shogun: Jonas Behr structure * r400aa70 / (3 files in 3 dirs): Debug code removed; possibly large array not on stack any more - http://bit.ly/dIHLTq | 12:51 |
CIA-30 | shogun: Christian Widmer boost_serialization * rfb8bd16 / (2 files): fixed problem with alphabet save_construct_data - http://bit.ly/hj72my | 12:51 |
CIA-30 | shogun: Soeren Sonnenburg master * rcbef0fb / (3 files in 2 dirs): | 12:51 |
CIA-30 | shogun: Implement snappy compression support (http://code.google.com/p/snappy/). | 12:51 |
CIA-30 | shogun: This code is still experimental and pending benchmarks. (+221 more commits...) - http://bit.ly/hMoOZv | 12:51 |
-!- ziyuang [~Ziyuan@218.104.71.166] has joined #shogun | 12:53 | |
-!- Ziyuan [~Ziyuan@218.104.71.166] has quit [Ping timeout: 260 seconds] | 12:53 | |
-!- ziyuang [~Ziyuan@218.104.71.166] has quit [Client Quit] | 12:57 | |
-!- Ziyuan [~Ziyuan@218.104.71.166] has joined #shogun | 12:58 | |
CIA-30 | shogun: Soeren Sonnenburg master * rbbf6afa / src/configure : | 13:04 |
CIA-30 | shogun: add flags to support lua,ruby and java via | 13:04 |
CIA-30 | shogun: ./configure --interfaces=libshogun,ruby_modular,lua_modular,java_modular | 13:04 |
CIA-30 | shogun: basic autodetection should work for java and ruby paths. Autodetection | 13:04 |
CIA-30 | shogun: for lua should be perfect. - http://bit.ly/gLxDwh | 13:04 |
CIA-30 | shogun: Soeren Sonnenburg master * rad96ba3 / (src/ChangeLog src/.authors): remove d.l.s email address from changelog - http://bit.ly/fKE3ez | 13:04 |
CIA-30 | shogun: Soeren Sonnenburg master * rcbef0fb / (3 files in 2 dirs): | 13:04 |
CIA-30 | shogun: Implement snappy compression support (http://code.google.com/p/snappy/). | 13:04 |
CIA-30 | shogun: This code is still experimental and pending benchmarks. - http://bit.ly/hMoOZv | 13:04 |
@sonney2k | Ziyuan, check if it is the same and if it is - submit a patch :) | 13:05 |
@sonney2k | Ziyuan, reason is probably that it didn't exist in lib/Mathematics.h at that time | 13:06 |
* sonney2k has just recreated the shogun repository for the sake of keeping peace on earth | 13:07 | |
@knrrrd | bettyboo: What's up? Anything new | 13:10 |
@bettyboo | knrrrd: What's up? Anything new | 13:10 |
@knrrrd | bettyboo: ? | 13:10 |
@bettyboo | knrrrd: i am already feeling like a scientist here | 13:10 |
@knrrrd | :) | 13:10 |
Ziyuan | Ok | 13:31 |
-!- ziyuang [~Ziyuan@116.23.213.66] has joined #shogun | 13:40 | |
-!- Ziyuan [~Ziyuan@218.104.71.166] has quit [Ping timeout: 276 seconds] | 13:42 | |
-!- siddharth_k [~siddharth@117.211.88.150] has quit [Ping timeout: 240 seconds] | 13:43 | |
-!- aifargonos [~aifargono@193.206.186.107] has joined #shogun | 13:45 | |
-!- aifargonos [~aifargono@193.206.186.107] has quit [Ping timeout: 260 seconds] | 13:52 | |
@knrrrd | man, tcl is one annoying programming language | 13:56 |
@knrrrd | java s.length(), python len(s), tcl [string length s] | 13:57 |
-!- siddharth_k [~siddharth@117.211.88.150] has joined #shogun | 13:58 | |
-!- ziyuang [~Ziyuan@116.23.213.66] has quit [] | 13:59 | |
@knrrrd | what a bloat: [string match pattern txt] | 13:59 |
-!- Ziyuan [~Ziyuan@116.23.213.66] has joined #shogun | 13:59 | |
@knrrrd | !seen sonney2k | 14:00 |
@bettyboo | knrrrd, sonney2k is right here! | 14:00 |
@knrrrd | but idle. | 14:00 |
@sonney2k | knrrrd, all my screen is flashing ;-) | 14:03 |
@bettyboo | ;D | 14:03 |
@knrrrd | hehe, that's good | 14:04 |
-!- aifargonos [~aifargono@193.206.186.107] has joined #shogun | 14:13 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has left #shogun [] | 14:19 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has joined #shogun | 14:19 | |
-!- mode/#shogun [+o bettyboo] by ChanServ | 14:20 | |
-!- sploving [~root@124.16.139.196] has quit [Ping timeout: 240 seconds] | 14:35 | |
@knrrrd | it took some time, but betty is fine now. | 14:44 |
@bettyboo | knrrrd: 3 days ago... hansenet was quick to send a new one (took 1.5 days) ... | 14:44 |
@knrrrd | i know | 14:44 |
@knrrrd | bettyboo, does shogun have a python interface? | 14:50 |
@bettyboo | knrrrd: does shogun have a python interface? | 14:50 |
@knrrrd | i'm going nuts | 14:50 |
-!- Tanmoy [75d35896@gateway/web/freenode/ip.117.211.88.150] has joined #shogun | 14:51 | |
-!- epps [~epps@84.18.157.200] has joined #shogun | 14:51 | |
-!- epps [~epps@84.18.157.200] has quit [Changing host] | 14:51 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 14:51 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has quit [Quit: time to recycle] | 14:54 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has joined #shogun | 14:57 | |
-!- mode/#shogun [+o bettyboo] by ChanServ | 14:57 | |
Tanmoy | @sonney2k is there no method described for the Covariance matrix? | 15:08 |
@sonney2k | Tanmoy, well PCA obviously needs it so the code is probably hidden in there (and should probably be moved into Mathematics) | 15:09 |
Tanmoy | Kernel PCA too | 15:10 |
Tanmoy | @sonney2k there are no methods for regularisation of ill-posed problems | 15:10 |
@sonney2k | that's true | 15:10 |
Tanmoy | that way computations on the Covariance matrix could be reduced | 15:10 |
@sonney2k | yes | 15:10 |
Tanmoy | even if it's a dot product | 15:11 |
Tanmoy | @sonney2k so what preprocessing methods are actually there apart from a kernel matrix? because i need to recompute that anyway | 15:16 |
-!- epps [~epps@unaffiliated/epps] has quit [Quit: Leaving] | 15:20 | |
@knrrrd | bettyboo: do you know any further code for pca and covariance? | 15:25 |
@bettyboo | knrrrd: yeah. a lot of work and not many results. so far | 15:25 |
-!- epps [~epps@84.18.157.200] has joined #shogun | 15:27 | |
-!- epps [~epps@84.18.157.200] has quit [Changing host] | 15:27 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 15:27 | |
-!- aifargonos [~aifargono@193.206.186.107] has quit [Ping timeout: 264 seconds] | 15:28 | |
-!- aifargonos [~aifargono@46.18.27.35] has joined #shogun | 15:38 | |
-!- aifargonos [~aifargono@46.18.27.35] has quit [Ping timeout: 240 seconds] | 15:45 | |
-!- aifargonos [~aifargono@46.18.27.35] has joined #shogun | 15:46 | |
@knrrrd | http://www.bierpad.nl/ | 15:47 |
-!- skydiver [4deac315@gateway/web/freenode/ip.77.234.195.21] has joined #shogun | 15:53 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has quit [Remote host closed the connection] | 16:00 | |
-!- bettyboo [~bettyboo@bane.ml.tu-berlin.de] has joined #shogun | 16:00 | |
-!- mode/#shogun [+o bettyboo] by ChanServ | 16:00 | |
-!- epps [~epps@unaffiliated/epps] has quit [Read error: Connection reset by peer] | 16:17 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 16:17 | |
@knrrrd | pretty cool host: @unaffiliated/epps | 16:18 |
epps | thanks :) | 16:19 |
@knrrrd | makes you look very unaffiliated | 16:19 |
@knrrrd | ;) | 16:20 |
@bettyboo | ;> | 16:20 |
-!- seviyor [c1e20418@gateway/web/freenode/ip.193.226.4.24] has joined #shogun | 16:21 | |
@sonney2k | Tanmoy, I don't understand your question... | 16:22 |
-!- rubic [75d35896@gateway/web/freenode/ip.117.211.88.150] has joined #shogun | 16:28 | |
-!- rubic [75d35896@gateway/web/freenode/ip.117.211.88.150] has quit [Ping timeout: 252 seconds] | 17:01 | |
siddharth_k | sonney2k, do we have to compile it again? | 17:29 |
@sonney2k | knrrrd, I've updated the website http://www.shogun-toolbox.org/ and http://www.shogun-toolbox.org/jmlr10/ ... comments welcome | 17:30 |
@sonney2k | siddharth_k, you have to git clone and compile again yes. | 17:30 |
@sonney2k | sorry about that | 17:30 |
siddharth_k | ok | 17:30 |
@knrrrd | okay i take a look. | 17:35 |
@knrrrd | betty: can you also go over the page and check. thx. | 17:35 |
@bettyboo | knrrrd: but you are interested in shogun? | 17:35 |
@knrrrd | bettyboo: yes but you could help here. seriously | 17:36 |
@bettyboo | knrrrd: seriously. shogun is great | 17:36 |
@knrrrd | hrhr | 17:37 |
@knrrrd | sonney2k: webpage looks good. though it is very wide now | 17:42 |
@knrrrd | i am not sure if this has been the case before you added gsoc | 17:42 |
@sonney2k | knrrrd, I don't recall either | 17:43 |
@knrrrd | we could ask betty, maybe she knows. | 17:44 |
@bettyboo | knrrrd: that way computations on the Covariance matrix could be reduced | 17:44 |
@knrrrd | ;) | 17:44 |
@bettyboo | knrrrd, ;D | 17:44 |
@knrrrd | ROFL:ROFL:LOL:ROFL:ROFL | 17:46 |
@knrrrd | | | 17:46 |
@knrrrd | L /--------- | 17:46 |
@knrrrd | LOL=== []\ | 17:46 |
@knrrrd | L \ \ | 17:46 |
@knrrrd | \_________\ | 17:46 |
@knrrrd | | | | 17:46 |
@knrrrd | anyway. | 17:46 |
@knrrrd | interesting post on google's new feature to translate pig latin: http://bit.ly/e3Cukz | 17:47 |
-!- blackburn [~qdrgsm@188.168.2.37] has joined #shogun | 18:05 | |
-!- seviyor [c1e20418@gateway/web/freenode/ip.193.226.4.24] has quit [Quit: Page closed] | 18:34 | |
Tanmoy | @sonney2k what i mean is: what preprocessing methods are there for Kernel PCA? | 19:14 |
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 240 seconds] | 19:51 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 19:54 | |
@knrrrd | Almost ready for the weekend | 20:19 |
-!- dvevre [b49531e3@gateway/web/freenode/ip.180.149.49.227] has joined #shogun | 20:34 | |
-!- skydiver [4deac315@gateway/web/freenode/ip.77.234.195.21] has quit [Ping timeout: 252 seconds] | 20:43 | |
-!- Tanmoy [75d35896@gateway/web/freenode/ip.117.211.88.150] has quit [Quit: Page closed] | 20:49 | |
-!- seviyor [bc19cab5@gateway/web/freenode/ip.188.25.202.181] has joined #shogun | 21:43 | |
-!- alesis-novik [~alesis@188.74.87.84] has joined #shogun | 23:02 | |
-!- seviyor [bc19cab5@gateway/web/freenode/ip.188.25.202.181] has quit [Quit: Page closed] | 23:13 | |
@sonney2k | *lol* http://mail.google.com/mail/help/motion.html | 23:21 |
-!- epps [~epps@unaffiliated/epps] has quit [Ping timeout: 260 seconds] | 23:28 | |
alesis-novik | Have you seen the old school youtube or 3d xkcd? | 23:45 |
-!- skydiver [c255a037@gateway/web/freenode/ip.194.85.160.55] has joined #shogun | 23:53 | |
-!- epps [~epps@unaffiliated/epps] has joined #shogun | 23:56 | |
--- Log closed Sat Apr 02 00:00:36 2011 |
Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!