--- Log opened Mon Apr 09 00:00:19 2012
PhilTillet | blackburn, I rebased all my commit cause I thought one would be clearer, was it a mistake? (it seems to me that the big commit made things more unclear than anything else :p) | 00:14 |
blackburn | yeah probably no need to do that | 00:14 |
blackburn | however not so bad | 00:14 |
PhilTillet | blackburn, this year, the ViennaCL GSoC Idea is on solving the Eigenvalue Problem on GPU :p So in september it should be possible to integrate Hardware Accelerated dimension reduction into Shogun. | 00:18 |
blackburn | hmm nice | 00:18 |
blackburn | PhilTillet: are you applying for viennacl as well? | 00:19 |
PhilTillet | blackburn, no, I applied this year but to be honest was rejected (3 slots for 70 applications :D) | 00:19 |
PhilTillet | but I ended up working directly with Technical University of Vienna | 00:20 |
PhilTillet | to another "Summer of Code", but organized directly by their university | 00:20 |
blackburn | ah I see | 00:20 |
blackburn | you mean you applied last year? | 00:20 |
PhilTillet | (this is how I was able to write a paper etc..) | 00:21 |
blackburn | why didn't you apply this year then? | 00:21 |
PhilTillet | Cause I wanted to do machine learning | 00:21 |
PhilTillet | :p | 00:21 |
blackburn | I see | 00:21 |
PhilTillet | there was not a lot of C++/Machine Learning projects :p | 00:21 |
blackburn | one I guess | 00:22 |
PhilTillet | yes, did not find any other organization | 00:23 |
PhilTillet | well to a smaller extent there was also OpenCV, but their main focus is not Machine Learning | 00:25 |
PhilTillet | blackburn, yes, I mean I applied last year earlier (didn't notice the typo :p) | 00:31 |
PhilTillet | only applying for Shogun this year.. | 00:31 |
PhilTillet | I think I'm gonna listen to some Azis and then go to bed YaY | 00:32 |
blackburn | lol | 00:33 |
PhilTillet | last year this was the Rebecca Black period | 00:33 |
PhilTillet | http://www.youtube.com/watch?v=kfVsfOSbJY0 | 00:35 |
blackburn | oh that is too gay | 00:35 |
PhilTillet | she said in an interview that she was crazy about Justin Bieber :D | 00:36 |
blackburn | oh that is awful song | 00:37 |
blackburn | :D | 00:37 |
PhilTillet | The best is the lyrics :D | 00:37 |
blackburn | her voice is $#$#@ | 00:37 |
PhilTillet | "Yesterday was thursday, today is Friday, tomorrow is Saturday, and Sunday, comes afterwards" | 00:37 |
blackburn | we gonna have a ball today | 00:38 |
blackburn | wtf | 00:38 |
blackburn | are they talking about balls? | 00:38 |
blackburn | :D | 00:38 |
PhilTillet | I don't know :D I have to admit that it sounds like | 00:38 |
PhilTillet | Indeed the official lyrics are "we gonna have a ball today". | 00:39 |
blackburn | I am not any adult but this is way too 'teenagy' :D | 00:41 |
PhilTillet | :D | 00:42 |
-!- harshit_ [~harshit@182.68.113.64] has joined #shogun | 00:46 | |
harshit_ | blackburn: Hi ! | 00:51 |
blackburn | hi | 00:51 |
harshit_ | hey, I haven't done anything in the last 2 days related to shogun | 00:51 |
harshit_ | because I'm not able to fully understand what needs to be done for LBP features | 00:52 |
blackburn | may be I can suggest anything else? | 00:52 |
harshit_ | blackburn: Could you suggest a somewhat easier task for me to do in the meanwhile? | 00:52 |
blackburn | aham | 00:52 |
harshit_ | Do you want to see s3vm in shogun ? | 00:53 |
harshit_ | or co-clustering | 00:54 |
harshit_ | or any other semi-supervised learning algo | 00:54 |
blackburn | why not if it is easier :) | 00:54 |
harshit_ | So first step would be to find a functional library.. | 00:55 |
harshit_ | Also I have some experience with classifiers, so that should be easy .. | 00:56 |
harshit_ | blackburn: I'll mail you some of the implementations of S3VM tomorrow, please have a look and tell me whether it would be worth my time to work on it! | 01:02 |
harshit_ | For now,Bye | 01:02 |
blackburn | ok sure | 01:02 |
blackburn | bye | 01:02 |
-!- harshit_ [~harshit@182.68.113.64] has quit [Quit: Leaving] | 01:04 | |
PhilTillet | blackburn, where are you from by the way? | 01:17 |
blackburn | blackburn: samara just like genix | 01:17 |
PhilTillet | oooh you are neighbours :p | 01:18 |
PhilTillet | and what time is it there?? | 01:18 |
blackburn | yes | 01:18 |
blackburn | 03:18 | 01:18 |
blackburn | I shall go to bed probably :D | 01:18 |
PhilTillet | :D | 01:18 |
PhilTillet | I think people are always more productive during night | 01:19 |
blackburn | yeah probably | 01:19 |
blackburn | however I still need to wake up in 6 hrs | 01:19 |
blackburn | and will be non-productive next morning :) | 01:19 |
PhilTillet | that's a point :D | 01:19 |
blackburn | okay see you | 01:20 |
PhilTillet | see you | 01:20 |
blackburn | it is night in france as well so good night :) | 01:20 |
-!- blackburn [~qdrgsm@83.234.54.186] has quit [Quit: Leaving.] | 01:20 | |
-!- PhilTillet [~Philippe@vir78-1-82-232-38-145.fbx.proxad.net] has quit [Quit: Leaving] | 01:25 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 02:24 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 06:41 | |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has joined #shogun | 08:46 | |
-!- V[i]ctor [~victor@host-176-100-246-254.masterbit.su] has joined #shogun | 09:46 | |
n4nd0 | blackburn: let's see if I succeed with the cover tree mission :P | 10:07 |
blackburn | n4nd0: oh that's painful | 10:09 |
blackburn | damn, I want to sleep | 10:10 |
n4nd0 | blackburn: are you tired? | 10:11 |
blackburn | yeah want to sleeep | 10:12 |
n4nd0 | blackburn: first I am going to test if JL's implementation is actually faster using their tests and comparing to our unbeatable quick sort :) | 10:12 |
blackburn | you should note that as I said before qsort is slower in the lle | 10:12 |
n4nd0 | yes | 10:13 |
blackburn | that is actually impossible - yet that's what your test seems to show | 10:13 |
n4nd0 | yeah ... I don't understand why it turns out to be faster in LLE | 10:14 |
blackburn | qsort had something like O(Nlog N) | 10:14 |
n4nd0 | there is something fishy | 10:14 |
blackburn | it is true but there are N vectors or so | 10:14 |
blackburn | so it should be O(N^2 log N) | 10:14 |
n4nd0 | cover tree? | 10:15 |
n4nd0 | what do you mean it should be O(N² log(N))? | 10:15 |
blackburn | in the LLE at least | 10:15 |
blackburn | I mean if there are N vectors | 10:15 |
blackburn | and we need a neighbor for each of them | 10:16 |
blackburn | it would be N*O(N log N) | 10:16 |
blackburn | with qsort | 10:16 |
blackburn | and should be N*O(log N) with covertree | 10:16 |
n4nd0 | yes | 10:16 |
n4nd0 | but I guess that the implementation of our tree is not that efficient | 10:16 |
n4nd0 | or? | 10:16 |
blackburn | yes but O should stay the same | 10:17 |
blackburn | if it is not log N - wtf it is not a covertree then :D | 10:17 |
blackburn | however it can be that | 10:17 |
n4nd0 | that's why I want to make this test using JL's impl. first | 10:18 |
n4nd0 | if quick sort is still slower, we better ignore it | 10:18 |
blackburn | makes sense | 10:18 |
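[editor's note] The complexity argument above (N queries, each answered by sorting all N distances, i.e. N·O(N log N) with qsort versus N·O(log N) with a cover tree) can be made concrete with a toy sketch. This is not Shogun code; the function name and data are made up for illustration.

```python
import math

def knn_bruteforce(points, k):
    """For each of the N points, sort all N distances: N * O(N log N) total.
    A cover tree (or KD-tree) would answer each query in roughly O(log N)
    instead, which is the speedup being discussed above."""
    neighbors = []
    for p in points:
        # sort indices of all points by distance to p (the qsort step)
        order = sorted(range(len(points)),
                       key=lambda j: math.dist(p, points[j]))
        # skip index 0, which is p itself (assumes points are distinct)
        neighbors.append(order[1:k + 1])
    return neighbors

# e.g. nearest neighbor of each of three 2-D points
print(knn_bruteforce([(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)], 1))
```

`math.dist` requires Python 3.8+; the point of the sketch is only the cost structure, not the data structure itself.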
blackburn | hmm I'm curious whether they will wake me up if I fall asleep lool | 10:19 |
n4nd0 | haha | 10:20 |
n4nd0 | are you in the job? | 10:20 |
blackburn | yeah | 10:22 |
blackburn | I'm going to leave it actually | 10:22 |
blackburn | in june | 10:23 |
n4nd0 | you focus on GSoC then | 10:24 |
blackburn | right | 10:24 |
n4nd0 | have to go now | 10:28 |
n4nd0 | bye | 10:28 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 10:29 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 10:51 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 10:59 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 10:59 | |
-!- PSmitAalto [82e9b263@gateway/web/freenode/ip.130.233.178.99] has joined #shogun | 11:08 | |
blackburn | wiking: have you seen lecun's paper on EBM? | 11:14 |
blackburn | energy based models | 11:14 |
wiking | i've seen one energy based model | 11:14 |
wiking | but i'm not sure if it's the same | 11:14 |
wiking | do u have full title or link? | 11:14 |
blackburn | yeah | 11:14 |
blackburn | http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf | 11:14 |
blackburn | it looks like kind of generalization for all the latent and SO stuff | 11:15 |
wiking | i have this among my papers on my comp: Efficient Learning of Sparse Representations with an Energy-Based Model | 11:15 |
blackburn | https://groups.google.com/forum/?fromgroups#!forum/google-summer-of-code-discuss | 11:23 |
blackburn | whoa how stupid it is | 11:23 |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has quit [Ping timeout: 245 seconds] | 11:36 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 11:38 | |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has joined #shogun | 11:43 | |
wiking | sonney2k: here? | 11:59 |
-!- wiking_ [~wiking@78-23-191-201.access.telenet.be] has joined #shogun | 12:01 | |
-!- wiking_ [~wiking@78-23-191-201.access.telenet.be] has quit [Changing host] | 12:01 | |
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun | 12:01 | |
-!- wiking_ [~wiking@huwico/staff/wiking] has quit [Remote host closed the connection] | 12:02 | |
-!- wiking_ [~wiking@vpnb172.ugent.be] has joined #shogun | 12:02 | |
-!- wiking_ [~wiking@vpnb172.ugent.be] has quit [Changing host] | 12:02 | |
-!- wiking_ [~wiking@huwico/staff/wiking] has joined #shogun | 12:02 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Ping timeout: 245 seconds] | 12:04 | |
-!- wiking_ is now known as wiking | 12:04 | |
-!- PhilTillet [~Philippe@vir78-1-82-232-38-145.fbx.proxad.net] has joined #shogun | 12:21 | |
PSmitAalto | Hey All | 12:22 |
n4nd0 | PSmitAalto: hey! | 12:22 |
PSmitAalto | I am trying to get a MKLMultiClass to work, but for now I'm getting quite bad results | 12:22 |
PSmitAalto | What is a good strategy for setting the right epsilon, mkl_epsilon and mkl_norm | 12:23 |
n4nd0 | PSmitAalto: ok, I have not used myself so probably I am not the best to give suggestions but | 12:23 |
n4nd0 | have you tried using model selection for that? | 12:23 |
blackburn | PSmitAalto: in what way is it bad? | 12:23 |
PSmitAalto | No I haven't tried to use model selection. | 12:24 |
PSmitAalto | The results are worse than training an svm on a single feature | 12:24 |
blackburn | I see. however I am not a mkl expert as well | 12:25 |
blackburn | my suggestion is to ask about that using mailing list | 12:25 |
blackburn | IIRC alex binder reads it and could answer | 12:26 |
PSmitAalto | Ok, I'll do that | 12:26 |
PSmitAalto | Thanks anyway! | 12:26 |
blackburn | you may also glance over mkl paper | 12:27 |
blackburn | http://jmlr.csail.mit.edu/papers/volume12/kloft11a/kloft11a.pdf | 12:28 |
PSmitAalto | That is even better! I'll go through that one first | 12:28 |
PhilTillet | hello everybody | 12:29 |
blackburn | hey | 12:37 |
wiking | :> | 12:38 |
wiking | let's see if i'm gonna have the same troubles with mkl | 12:38 |
blackburn | yeah. @PSmitAalto wiking is on the same path as you, trying some mkl stuff | 12:40 |
wiking | PSmitAalto: i'll be having results within 20 mins | 12:42 |
blackburn | genix: here? | 12:43 |
PSmitAalto | Ok, nice | 12:43 |
genix | blackburn, yep | 12:54 |
genix | hi all | 12:54 |
blackburn | genix: small task | 12:55 |
blackburn | genix: do you know how does sg_progress stuff work? | 12:55 |
-!- genix [~gsomix@188.168.13.216] has quit [Ping timeout: 260 seconds] | 12:59 | |
blackburn | oh he is afraid of sg_progress probably *lol* | 12:59 |
n4nd0 | :D | 13:00 |
-!- genix [~gsomix@85.26.165.173] has joined #shogun | 13:00 | |
blackburn | n4nd0: once you are finished with covertree I can suggest you some framework stuff as well (some simple memory measurement system) | 13:00 |
blackburn | memory usage* | 13:01 |
genix | blackburn, what up? | 13:01 |
genix | *s | 13:01 |
-!- genix is now known as gsomix | 13:01 | |
blackburn | gsomix: I was asking whether you know how SG_PROGRESS works | 13:01 |
blackburn | are you? | 13:01 |
gsomix | blackburn, no, I am not | 13:03 |
blackburn | https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/classifier/ConjugateIndex.cpp | 13:03 |
blackburn | line 134 | 13:03 |
blackburn | it is used to indicate the reached progress | 13:03 |
blackburn | to start with you could add similar stuff there: | 13:04 |
blackburn | https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/classifier/GaussianNaiveBayes.cpp | 13:04 |
gsomix | blackburn, hm, ok. | 13:06 |
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds] | 13:07 | |
gsomix | blackburn, how can I check the current progress? | 13:07 |
gsomix | ah, stop | 13:08 |
gsomix | I figured out | 13:08 |
blackburn | gsomix: http://pastebin.com/ifFsCG5c | 13:09 |
blackburn | use something like that | 13:09 |
gsomix | ok | 13:09 |
blackburn | however that would be too fast | 13:09 |
blackburn | increase N and number of classes | 13:09 |
blackburn | currently it would be 50% and 100% only | 13:10 |
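[editor's note] A minimal sketch of the SG_PROGRESS pattern being described: report percentage done once per unit of work in a training loop. This is a Python analogue, not Shogun's actual C++ macro; the function name is made up. With only 2 iterations it reports exactly 50% and 100%, matching the remark above.

```python
def train_with_progress(n_iters, report=print):
    """Toy analogue of calling SG_PROGRESS inside a training loop:
    after each unit of work, report the percentage completed."""
    for i in range(n_iters):
        # ... one unit of training work would happen here ...
        report("%d%%" % (100 * (i + 1) // n_iters))

train_with_progress(2)   # reports 50% then 100%
```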
blackburn | gsomix: a little more ambitious task would be to add progress reporting to some svm solvers | 13:14 |
-!- genix [~gsomix@188.168.14.52] has joined #shogun | 13:19 | |
-!- gsomix [~gsomix@85.26.165.173] has quit [Ping timeout: 246 seconds] | 13:19 | |
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun | 13:19 | |
n4nd0 | blackburn: hey! what is that framework for memory measurement you were talking about? | 13:31 |
-!- pluskid [~chatzilla@111.120.9.128] has joined #shogun | 13:31 | |
blackburn | n4nd0: I think that would be nice to have some system that measures memory usage | 13:31 |
blackburn | i.e. you could get how much time classifier ate while training | 13:32 |
blackburn | or applying | 13:32 |
blackburn | err no time | 13:32 |
blackburn | memory | 13:32 |
blackburn | pluskid: want simple task? | 13:32 |
n4nd0 | blackburn: haha you are with ideas today :) | 13:32 |
blackburn | yes | 13:32 |
blackburn | *nothing to do at job* | 13:32 |
n4nd0 | blackburn: :D do you have something particular in mind / somewhere you have seen something similar before? | 13:33 |
blackburn | not really | 13:33 |
blackburn | ok the idea is rather simple | 13:33 |
blackburn | but could be extended seriously | 13:33 |
blackburn | n4nd0: simplest way is to add SG_MALLOC_N or so | 13:34 |
blackburn | it could store name of block | 13:34 |
blackburn | however I don't really like it | 13:34 |
blackburn | better would be to add new class | 13:35 |
blackburn | and each malloc should add memory usage info to it | 13:35 |
blackburn | once you have some get_memory_stats method in SGObject you can check all you need | 13:36 |
n4nd0 | blackburn: I think I understand the idea | 13:36 |
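[editor's note] The per-object memory bookkeeping blackburn sketches (each allocation registers its owner so a `get_memory_stats`-style method can report usage per object, e.g. "how much memory libsvm ate") could look roughly like this. This is a hypothetical illustration, not Shogun's design; all names are invented.

```python
class MemoryTracker:
    """Sketch of per-owner allocation accounting: every malloc records
    (owner, nbytes) so usage can later be queried per object."""
    def __init__(self):
        self.usage = {}

    def malloc(self, owner, nbytes):
        # record who allocated how much, then hand back a buffer
        self.usage[owner] = self.usage.get(owner, 0) + nbytes
        return bytearray(nbytes)  # stand-in for the real allocation

    def stats(self, owner):
        """Analogue of a get_memory_stats() call on an SGObject."""
        return self.usage.get(owner, 0)

tracker = MemoryTracker()
tracker.malloc("libsvm", 1024)
tracker.malloc("libsvm", 512)
print(tracker.stats("libsvm"))  # total bytes attributed to "libsvm"
```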
pluskid | blackburn: what's that? | 13:37 |
blackburn | n4nd0: however better try to finish covertree - would be useful | 13:37 |
blackburn | pluskid: https://github.com/shogun-toolbox/shogun/blob/master/src/shogun/classifier/GaussianNaiveBayes.cpp currently it uses whole training matrix to train | 13:37 |
blackburn | shouldn't be so | 13:37 |
pluskid | what's the difference between a Gaussian NB and an ordinary NB? | 13:38 |
pluskid | OK | 13:39 |
pluskid | I find the doc | 13:39 |
pluskid | I'll look at it | 13:39 |
n4nd0 | blackburn: yes, I am going to devote some work to cover tree | 13:39 |
n4nd0 | blackburn: but I think it will take me some effort | 13:40 |
blackburn | pluskid: it assumes that features are continuous and gaussian distributed | 13:40 |
n4nd0 | the code is a total mess to be honest | 13:40 |
blackburn | pluskid: no way to use NB with continuous variables you know | 13:40 |
blackburn | (if you don't know distribution) | 13:40 |
pluskid | yeah | 13:40 |
blackburn | n4nd0: yeah but it is the reality - svm solvers are not much better I think | 13:41 |
n4nd0 | blackburn: it is good to get training in this aspect too :) | 13:43 |
n4nd0 | and even if I complain, this must be better than starting from scratch | 13:43 |
blackburn | hah sure | 13:43 |
blackburn | writing covertree from scratch would be a headache | 13:44 |
pluskid | blackburn: you mean should not get the whole matrix in a whole, but use get_computed_dot_feature_vector one by one? | 13:51 |
blackburn | pluskid: exactly | 13:51 |
pluskid | ok | 13:51 |
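[editor's note] The change pluskid agrees to (fetch one feature vector at a time via `get_computed_dot_feature_vector` instead of materializing the whole matrix) amounts to streaming the per-class Gaussian parameter estimation. A rough sketch of that idea, with invented names and a simple sum / sum-of-squares accumulation:

```python
def gnb_fit_streaming(get_vector, labels, n_features):
    """Estimate per-class mean and variance one vector at a time,
    so the full feature matrix never needs to be in memory."""
    classes = sorted(set(labels))
    count = {c: 0 for c in classes}
    s = {c: [0.0] * n_features for c in classes}    # per-class sums
    ss = {c: [0.0] * n_features for c in classes}   # per-class sums of squares
    for i, y in enumerate(labels):
        x = get_vector(i)  # fetch a single feature vector on demand
        count[y] += 1
        for d in range(n_features):
            s[y][d] += x[d]
            ss[y][d] += x[d] * x[d]
    mean = {c: [s[c][d] / count[c] for d in range(n_features)]
            for c in classes}
    # biased (maximum-likelihood) variance: E[x^2] - E[x]^2
    var = {c: [ss[c][d] / count[c] - mean[c][d] ** 2
               for d in range(n_features)]
           for c in classes}
    return mean, var
```

For numerical robustness a real implementation might prefer Welford's online update, but the memory argument is the same.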
n4nd0 | blackburn: don't you think that these cases of computing the whole matrix or re-computing features | 13:54 |
n4nd0 | blackburn: something similar to SPE with the distance matrix | 13:55 |
n4nd0 | blackburn: should be done in both ways? | 13:55 |
blackburn | n4nd0: yes and again my code | 13:55 |
blackburn | ah | 13:55 |
blackburn | there - not really | 13:55 |
n4nd0 | blackburn: why? | 13:55 |
blackburn | it is not really faster to compute whole matrix | 13:55 |
n4nd0 | blackburn: do you think so? | 13:55 |
blackburn | yes - that should be the same | 13:55 |
n4nd0 | mmm I thought it should be faster | 13:56 |
blackburn | in case of simple features and available feature matrix | 13:56 |
blackburn | no difference to get feature vector or to get feature matrix | 13:56 |
blackburn | just pointer right? | 13:56 |
blackburn | n4nd0: btw that gnb is really fast | 13:58 |
blackburn | you could try on that data I gave to you | 13:59 |
n4nd0 | blackburn: Gaussian Naive Bayes? | 13:59 |
blackburn | n4nd0: yes | 13:59 |
blackburn | that thing was written by me when I was young^W applying for gsoc 2011 btw | 14:00 |
n4nd0 | blackburn: my point with the feature / distance matrices is that if we don't pre-compute them, some things may be computed several times | 14:00 |
n4nd0 | blackburn: haha are you old already :P | 14:01 |
blackburn | n4nd0: well yes but it is cost of doing some large-scale applicable stuff | 14:01 |
-!- genix is now known as gsomix | 14:01 | |
n4nd0 | blackburn: yes, but that is why I think that it could be better to offer both possibilities | 14:02 |
n4nd0 | in some application it can be interesting to compute the whole matrix, in some others not | 14:02 |
blackburn | n4nd0: anyway in the stuff you did it is ok to use callback | 14:03 |
blackburn | because callback can access precomputed stuff as well | 14:03 |
n4nd0 | e.g. for using the cover tree, pluskid measured that there were a lot of accesses to distances | 14:03 |
blackburn | err not callback - virtual function | 14:03 |
blackburn | code would be messy if you would support matrices as well | 14:04 |
n4nd0 | we can make it transparent using a class and overloading '()' | 14:06 |
n4nd0 | don't you think so? | 14:06 |
n4nd0 | I think that override may be a better word here | 14:07 |
blackburn | why don't you like ->distance(i,j)? | 14:07 |
n4nd0 | I like it | 14:08 |
n4nd0 | that's why I mean the code would not turn to be a mess | 14:08 |
n4nd0 | the difference would be that actually | 14:08 |
blackburn | I mean it already supports precomputing | 14:08 |
n4nd0 | distance(i,j) would do inside | 14:08 |
n4nd0 | distance_mat[i + N*j] if the matrix is precomputed | 14:09 |
n4nd0 | or compute the distance between features i and j otherwise | 14:09 |
blackburn | n4nd0: yes so what you want to change? | 14:17 |
n4nd0 | blackburn: add an option as a class member that allows doing this selection | 14:18 |
n4nd0 | blackburn: what do you think? | 14:18 |
blackburn | n4nd0: it is manageable already with customdistance/kernel | 14:18 |
blackburn | look | 14:18 |
blackburn | https://github.com/lisitsyn/shogun/blob/libedrt/src/shogun/converter/LocallyLinearEmbedding.cpp | 14:19 |
blackburn | line 176 | 14:19 |
blackburn | here I added that stuff with no option but it is rather easy to change | 14:20 |
n4nd0 | ok | 14:21 |
blackburn | option should be there but it should not be handled with anything but Custom* | 14:21 |
blackburn | n4nd0: so no need to modify distance stuff | 14:21 |
n4nd0 | by CustomKernel? | 14:21 |
blackburn | kernel or distance | 14:22 |
blackburn | however there should be some caching I think | 14:22 |
blackburn | i.e. it would be nice to have function that caches distances to neighbors | 14:23 |
blackburn | I will add one later I think | 14:23 |
-!- pluskid [~chatzilla@111.120.9.128] has quit [Quit: ChatZilla 0.9.88.2 [Firefox 11.0/20120314124128]] | 14:26 | |
-!- pluskid [~chatzilla@173.254.214.60] has joined #shogun | 14:27 | |
-!- PSmitAalto [82e9b263@gateway/web/freenode/ip.130.233.178.99] has quit [Quit: Page closed] | 14:39 | |
n4nd0 | blackburn: did you try JL covertree? | 14:41 |
blackburn | n4nd0: no I thought you will :D | 14:42 |
n4nd0 | blackburn: haha yes, I am doing it | 14:42 |
pluskid | here comes the pull request | 14:42 |
n4nd0 | blackburn: it was to compare opinions | 14:42 |
n4nd0 | blackburn: I am surprised about the time it takes to construct it | 14:43 |
n4nd0 | blackburn: here it seems that it is constructed in 0.178523 seconds using 37749 vectors | 14:43 |
blackburn | pluskid: I will be able to merge it within 1-2hrs | 14:43 |
blackburn | n4nd0: wow nice | 14:44 |
n4nd0 | blackburn: that is quite different from the time of the other day with the other tree | 14:44 |
blackburn | not 423432423 seconds dncrane's did | 14:44 |
pluskid | blackburn: ok, I'll back to LARS then | 14:44 |
pluskid | tell me if there's any problems | 14:44 |
blackburn | sure | 14:45 |
blackburn | pluskid: thanks! | 14:46 |
pluskid | u r wel | 14:46 |
blackburn | I think I need to update news today | 14:46 |
pluskid | so? | 14:47 |
blackburn | to include all your (gsocers) contributions | 14:48 |
pluskid | on the homepage? | 14:48 |
blackburn | no, in repo | 14:48 |
pluskid | didn't see it | 14:49 |
pluskid | is there a NEWS file in the repo? | 14:49 |
blackburn | https://github.com/shogun-toolbox/shogun/blob/master/src/NEWS | 14:50 |
pluskid | oh, got it! | 14:50 |
-!- PhilTillet [~Philippe@vir78-1-82-232-38-145.fbx.proxad.net] has quit [Remote host closed the connection] | 14:51 | |
-!- blackburn [5bdfb203@gateway/web/freenode/ip.91.223.178.3] has quit [Quit: Page closed] | 15:02 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 245 seconds] | 15:10 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun | 15:11 | |
n4nd0 | shogun-buildbot: did you get tired of IRC? | 15:13 |
-!- shogun-buildbot [~shogun-bu@7nn.de] has quit [Ping timeout: 245 seconds] | 15:20 | |
-!- shogun-buildbot [~shogun-bu@7nn.de] has joined #shogun | 15:22 | |
-!- PhilTillet [~android@92.90.16.70] has joined #shogun | 15:57 | |
-!- PhilTillet [~android@92.90.16.70] has quit [Ping timeout: 260 seconds] | 16:01 | |
@sonney2k | n4nd0, hmmhh we have memory tracing already... | 16:06 |
@sonney2k | can be enabled with --enable-trace-mallocs IIRC | 16:06 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Read error: Operation timed out] | 16:08 | |
@sonney2k | wiking, whats up? | 16:10 |
wiking | sonney2k: i wanted to ask some stuff about mkl | 16:13 |
@sonney2k | ask | 16:13 |
@sonney2k | (I hate mkl btw) | 16:13 |
wiking | ahahhaha cool | 16:13 |
@sonney2k | close to useless crap :D | 16:14 |
wiking | your name is on the paper :))) | 16:14 |
wiking | since now i have like really different kind of features that's why i thought that it'd be great to use it | 16:14 |
wiking | you reckon it really wouldn't make much of a difference? | 16:15 |
@sonney2k | wiking, well you can even use different features without mkl | 16:15 |
@sonney2k | just use several kernels | 16:15 |
wiking | well yeah i've tried that | 16:15 |
@sonney2k | with shogun | 16:15 |
@sonney2k | (CombinedKernel) | 16:15 |
wiking | mmm | 16:15 |
wiking | but can i just chuck in a combined kernel | 16:15 |
wiking | to any solver? | 16:15 |
wiking | since now i'm just trying to follow the example from the tutorials | 16:16 |
wiking | i've following: ../examples/documented/python_modular/mkl_multiclass_modular.py | 16:17 |
gsomix | sonney2k, moin. how are you? | 16:29 |
@sonney2k | wiking, any kernel machine yes | 16:31 |
@sonney2k | wiking, the mkl stuff is only to learn the weights in front of the linear combination | 16:31 |
wiking | mmm but i guess the only difference then is that it doesn't compute the weighting i guess... | 16:31 |
@sonney2k | yes | 16:32 |
@sonney2k | and actually the weighting can make things worse - or better - depending on your problem :D | 16:32 |
@sonney2k | gsomix, tired - watched demos too long tonight | 16:32 |
wiking | oh great | 16:32 |
wiking | then first i just check out a simple combined kernel stuff | 16:32 |
@sonney2k | my experience is that mkl_norm=2 can help | 16:34 |
@sonney2k | (a tiny bit) | 16:34 |
wiking | ehehhe ok will give it a go as well | 16:34 |
wiking | it's just that i think a combined kernel must give me a better result | 16:34 |
@sonney2k | it usually is better to work on actual features, do fine grained model selection etc | 16:34 |
@sonney2k | wiking, usually yes | 16:35 |
wiking | than concatenating each feature and using one type of kernel on it | 16:35 |
@sonney2k | but you have to normalize data / kernels | 16:35 |
wiking | since the features are really different in nature... | 16:35 |
@sonney2k | and that itself can be tricky :D | 16:35 |
@sonney2k | and mkl won't help you with this either... | 16:35 |
wiking | hehehehe | 16:35 |
wiking | ok let's see how it works out | 16:35 |
wiking | i'm just still having fun with loading the features into shogun in a good way | 16:36 |
wiking | since they are in various formats :) | 16:36 |
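[editor's note] What sonney2k describes above (a CombinedKernel is just a weighted sum of per-block kernels, with MKL only learning the weights, and normalization mattering because the blocks differ in scale) can be sketched numerically. This is an illustration, not Shogun's API; the linear kernel and trace normalization are one common choice among several.

```python
import numpy as np

def combined_kernel(feature_blocks, weights=None):
    """Weighted sum of per-block kernel matrices. Fixing uniform weights
    corresponds to using CombinedKernel without MKL; MKL would learn
    `weights` instead."""
    mats = []
    for X in feature_blocks:
        K = X @ X.T             # linear kernel on this feature block
        K = K / np.trace(K)     # trace normalization, so no block dominates
                                # (assumes a nonzero trace)
        mats.append(K)
    if weights is None:
        weights = np.ones(len(mats)) / len(mats)  # uniform combination
    return sum(w * K for w, K in zip(weights, mats))
```

Any kernel machine can then consume the combined matrix, which is why "just use several kernels" works without MKL.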
-!- muddo [~muddo@gateway/tor-sasl/muddo] has quit [Ping timeout: 276 seconds] | 16:39 | |
-!- muddo [~muddo@gateway/tor-sasl/muddo] has joined #shogun | 16:42 | |
-!- V[i]ctor [~victor@host-176-100-246-254.masterbit.su] has quit [Quit: Leaving.] | 16:55 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 17:04 | |
-!- PhilTillet [~Philippe@157.159.42.154] has joined #shogun | 17:06 | |
PhilTillet | Hi hi | 17:06 |
n4nd0 | sonney2k: hey! how is it going? | 17:08 |
n4nd0 | sonney2k: so we already have memory tracing implemented in shogun? | 17:09 |
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has joined #shogun | 17:11 | |
-!- V[i]ctor [~victor@host-176-100-246-254.masterbit.su] has joined #shogun | 17:19 | |
wiking | doh i'm doing something wrong | 17:27 |
wiking | [ERROR] Index out of Range: idx_a=0/0 idx_b=0/0 | 17:27 |
wiking | oh i know what :> | 17:28 |
wiking | sonney2k: still around? i just wonder for example in the case of ../examples/documented/python_modular/kernel_combined_custom_poly_modular.py | 17:41 |
wiking | why do you create twice the CombinedKernel.... | 17:41 |
wiking | wouldn't it be enough to create it once...? and then just use different kernel.init calls? | 17:42 |
-!- pluskid [~chatzilla@173.254.214.60] has quit [Quit: ChatZilla 0.9.88.2 [Firefox 11.0/20120314124128]] | 17:46 | |
-!- PhilTillet [~Philippe@157.159.42.154] has quit [Remote host closed the connection] | 18:20 | |
wiking | is there a function for merging/concatenating simple features? | 18:21 |
-!- PhilTillet [~Philippe@157.159.42.154] has joined #shogun | 18:43 | |
@sonney2k | n4nd0, well one can list blocks of memory that are allocated | 19:12 |
@sonney2k | n4nd0, so everything that SG_MALLOC etc allocates can be traced when ./configure --enable-trace-mallocs is on | 19:12 |
@sonney2k | wiking, use CCombinedDotFeatures | 19:12 |
@sonney2k | then you can mix dense, sparse, xxx features | 19:12 |
@sonney2k | and use any e.g. linear classifier with it | 19:13 |
@sonney2k | or if you want to mix any kind of feature types, CCombinedFeatures | 19:13 |
@sonney2k | wiking, yes one kernel is sufficient | 19:13 |
PhilTillet | sonney2k, I have a question :D Why isn't the Gaussian Kernel implemented as a CDistanceKernel? | 19:14 |
wiking | sonney2k: heheh yeah i've tried with one kernel and worked... i'm trying now combined kernel but i want some of my features to be handled by one type of kernel, that's why i want to concatenate some features | 19:14 |
wiking | ah i see append_feature_obj should work for CCombinedDotFeatures | 19:15 |
-!- harshit_ [~harshit@182.64.221.94] has joined #shogun | 19:30 | |
wiking | mmm i've found NormOne preprocessor but is there NormTwo, or NormN for that matter ? as i cannot find it :( | 19:30 |
harshit_ | n4nd0: hola ! , wassup .! | 19:32 |
gsomix | http://piccy.info/view3/2869209/689d46487f8b6162bd1c6c1534b630f9/ Samara city... =___= | 19:33 |
n4nd0 | harshit_: hey! how is it going? | 19:38 |
wiking | n4nd0: yo | 19:40 |
V[i]ctor | hi! | 19:43 |
@sonney2k | wiking, no | 19:44 |
wiking | sonney2k: mmm i guess then it's time to have one :) | 19:45 |
@sonney2k | heh | 19:45 |
wiking | can it go as preprocessor ? just like in case of NormOne ? | 19:45 |
@sonney2k | wiking, look at what normone does | 19:45 |
wiking | yep i've checked | 19:45 |
@sonney2k | not sure what you want to do... | 19:46 |
wiking | well what if i want norm-2 | 19:46 |
n4nd0 | wiking: hey | 19:46 |
wiking | so the euclidean norm | 19:46 |
wiking | or any p-norm for that matter... where p>=1 | 19:48 |
wiking | that's why i thought having a NormP preprocessor would be good... which by defaults calculates the 2-normalized vectors of a feature matrix... | 19:49 |
harshit_ | n4nd0: good here :) | 19:52 |
harshit_ | n4nd0: Do you have any experience with semi supervised algos ? | 19:52 |
n4nd0 | harshit_: I have studied some EM stuff but I think that fits better in unsupervised | 19:53 |
n4nd0 | harshit_: at least that's the way I remember I studied it | 19:54 |
harshit_ | yeah EM is mostly unsupervised, have a look at the mail I sent | 19:54 |
harshit_ | If you have any idea about those implementations, do tell me | 19:55 |
CIA-64 | shogun: pluskid master * r1179258 / (3 files in 2 dirs): GNB: replace SGVector with SGMatrix to make the code easier to read. - http://git.io/hoqSCA | 19:59 |
CIA-64 | shogun: pluskid master * rf87cd67 / src/shogun/classifier/GaussianNaiveBayes.cpp : Fetch one feature vector at a time instead of the whole feature matrix for potential large-scale training. - http://git.io/RZ68Fw | 19:59 |
CIA-64 | shogun: pluskid master * rcaba5ca / examples/undocumented/python_modular/classifier_gaussiannaivebayes_modular.py : Remove debug code. - http://git.io/DxYHBg | 19:59 |
-!- blackburn [~qdrgsm@83.234.54.186] has joined #shogun | 20:00 | |
blackburn | mighty blackburn here lol | 20:03 |
PhilTillet | hi :) | 20:10 |
blackburn | hi | 20:10 |
PhilTillet | did you sleep well? | 20:11 |
blackburn | oh not really | 20:11 |
blackburn | :D | 20:11 |
PhilTillet | =D | 20:11 |
-!- harshit_ [~harshit@182.64.221.94] has quit [Quit: Leaving] | 20:12 | |
gsomix | blackburn, yo. | 20:17 |
blackburn | hi-hi | 20:20 |
blackburn | two days left for announcement of slots probably | 20:21 |
PhilTillet | yes.. | 20:22 |
@sonney2k | wiking, NormOne ensures actually ||x||_2 = 1 | 20:25 |
blackburn | SumOne is for L1 | 20:26 |
wiking | sonney2k: yeah i've realized, and i wondered why is it called NormOne when it's norm-2 :P | 20:26 |
blackburn | however yes I agree it should be generalized | 20:26 |
wiking | ok i'll make a fast one now | 20:26 |
wiking | as i need it anyways | 20:26 |
@sonney2k | well the norm of each vector is one :) | 20:26 |
blackburn | wiking: kind of norm->one | 20:26 |
@sonney2k | after processing . | 20:26 |
blackburn | wiking: could you please also specialize that for p=2 with cblas_dnrm2? | 20:31 |
wiking | ? | 20:31 |
wiking | explain more :) | 20:32 |
@sonney2k | speedup the code! | 20:32 |
blackburn | I know it is more stable in means of underflow and overflow | 20:32 |
wiking | ahhahahaha | 20:32 |
blackburn | than dot of two vectors | 20:32 |
blackburn | wiking: yeah seriously | 20:33 |
wiking | doin' it | 20:33 |
blackburn | dnrm2 is better than ddot | 20:33 |
wiking | just a sec | 20:33 |
blackburn | [1] http://fseoane.net/blog/2011/computing-the-vector-norm/ | 20:34 |
blackburn | :D | 20:34 |
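[editor's note] blackburn's point about `cblas_dnrm2` being safer than `ddot` for the 2-norm is about overflow/underflow: squaring large components before summing overflows, while dnrm2-style implementations factor out a scale first. A small sketch of the difference (pure Python, only illustrating the BLAS trick):

```python
import math

def norm2_naive(v):
    """sqrt(dot(v, v)): the squares overflow for components near 1e200."""
    return math.sqrt(sum(x * x for x in v))

def norm2_scaled(v):
    """dnrm2-style: divide by the largest magnitude before squaring,
    so every intermediate value stays within floating-point range."""
    m = max(abs(x) for x in v)
    if m == 0.0:
        return 0.0
    return m * math.sqrt(sum((x / m) ** 2 for x in v))

print(norm2_naive([1e200, 1e200]))   # overflows to inf
print(norm2_scaled([1e200, 1e200]))  # correct: sqrt(2) * 1e200
```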
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has quit [Quit: ( www.nnscript.com :: NoNameScript 4.22 :: www.esnation.com )] | 20:42 | |
@sonney2k | blackburn, I think we should then update CMath::twonorm too :) | 21:02 |
blackburn | sonney2k: we have CMath::twonorm? | 21:02 |
blackburn | lol | 21:02 |
@sonney2k | hehe | 21:02 |
wiking | ok | 21:05 |
wiking | ready | 21:05 |
wiking | oups almost... 1.0 norm is missing... :) | 21:05 |
n4nd0 | blackburn: hey! so it looks like sonney2k pointed out a solution that already exists for the memory allocation framework you suggested | 21:10 |
n4nd0 | blackburn: do you think there is anything else that you were thinking of but not covered there? | 21:10 |
blackburn | n4nd0: yes only small patch should be there probably | 21:10 |
@sonney2k | blackburn, what is missing? | 21:10 |
blackburn | yes, we can also store relation to object | 21:10 |
blackburn | I mean now we can't check how much memory libsvm ate | 21:11 |
@sonney2k | only in total yes / no annotations or so | 21:11 |
@sonney2k | blackburn, http://sonnenburgs.de/soeren/media/images/gsoc2012-proposals-stats.png | 21:12 |
wiking | sonney2k blackburn here's the implementation of p-norm: https://github.com/vigsterkr/shogun/commit/01c13468de45eec7616ecbbdb7ad1823fd029646 | 21:12 |
blackburn | oh how many links ;) | 21:12 |
wiking | if you have any comments let me know otherwise i do the things for modular interface and i'll submit a pull request | 21:12 |
blackburn | sonney2k: whoa a few just before deadline | 21:13 |
blackburn | wiking: please use_that_naming | 21:14 |
wiking | ah shit ok forgot :)) | 21:14 |
n4nd0 | sonney2k: nice graph :) | 21:15 |
wiking | blackburn: ok pushed... | 21:16 |
blackburn | okay I have to go for a while | 21:16 |
blackburn | wiking: Soeren will merge I think ;) | 21:17 |
wiking | blackburn: is the lapack handling ok? | 21:17 |
wiking | i suppose so just want to double check | 21:17 |
blackburn | yes | 21:17 |
wiking | ok | 21:17 |
wiking | then i'll do the modular thingy and then i'll do a commit + pull request | 21:17 |
wiking | and then it's up to you guys if you want to get rid of NormOne and SumOne :P | 21:18 |
wiking | pull request sent... | 21:23 |
shogun-buildbot | build #462 of python_modular is complete: Failure [failed test_1] Build details are at http://www.shogun-toolbox.org/buildbot/builders/python_modular/builds/462 blamelist: pluskid@gmail.com | 21:25 |
@sonney2k | OK I blogged about these stats: http://sonnenburgs.de/soeren/category/blog/#shogun-student-applications-statistics-google-summ | 21:43 |
@sonney2k | wiking, we cannot get rid of these things completely (for compatibility) | 21:44 |
wiking | okey no worries | 21:44 |
wiking | well, what can be done is simply have NormOne and SumOne inherit from this class | 21:46 |
gsomix | good night, guys | 21:46 |
PhilTillet | good night gsomix | 21:46 |
wiking | mmmm mkl is quite slow :( | 21:46 |
@sonney2k | wiking, as I said don't do it :) | 21:48 |
@sonney2k | wiking, what are you doing btw? | 21:48 |
@sonney2k | gsomix, have a good sleep | 21:48 |
wiking | sonney2k: well doing a combined kernel learning | 21:48 |
wiking | mmm ok i'll do the changes soon | 21:49 |
@sonney2k | wiking, with mkl or just with svm? | 21:50 |
wiking | well both | 21:50 |
wiking | i'm trying with gnmpsvm | 21:50 |
@sonney2k | ? | 21:50 |
@sonney2k | so just svm | 21:51 |
wiking | it works nice | 21:51 |
wiking | but i thought i give it a go with mkl | 21:51 |
@sonney2k | how big is the kernel cache? | 21:51 |
@sonney2k | how big is the data | 21:51 |
wiking | default | 21:51 |
shogun-buildbot | build #456 of ruby_modular is complete: Failure [failed test_1] Build details are at http://www.shogun-toolbox.org/buildbot/builders/ruby_modular/builds/456 blamelist: pluskid@gmail.com | 21:51 |
@sonney2k | which kernels? | 21:51 |
wiking | JensenShannon + LINEAR | 21:51 |
@sonney2k | wiking, did you use normone preproc ? | 21:51 |
wiking | the data... mmm it's about 1500 features | 21:51 |
@sonney2k | err kernel normalizer | 21:51 |
wiking | sonney2k: haven't tried kernel normalizer | 21:52 |
@sonney2k | wiking, it is sth you need ... I guess sqrtdiagkernelnormalizer is what you want :) | 21:52 |
@sonney2k | wiking, 1500 examples or dims? | 21:52 |
wiking | dims | 21:53 |
wiking | ok so what kernel normalizer should i use | 21:54 |
@sonney2k | and #examples? | 21:54 |
@sonney2k | sqrtdiagkernelnormalizer | 21:54 |
wiking | and only on the combinedkernel? | 21:54 |
wiking | or on each subkernels? | 21:54 |
@sonney2k | wiking, no, for each subkernel | 21:54 |
wiking | ok | 21:54 |
wiking | let's see how that helps | 21:54 |
wiking | why exactly sqrtdiagkernelnormalizer ? | 21:55 |
@sonney2k | trace(K)=1 | 21:55 |
wiking | ok adding them :))) | 21:56 |
wiking | trace(K) ? | 21:57 |
wiking | number of examples are 1k | 21:58 |
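The normalization being suggested can be sketched as follows: SqrtDiagKernelNormalizer divides each kernel entry k(x, y) by sqrt(k(x, x) * k(y, y)), so every diagonal entry of the normalized matrix becomes 1 (illustrative Python on a precomputed matrix; `sqrtdiag_normalize` is not the Shogun API):

```python
import math

def sqrtdiag_normalize(K):
    # k'(x, y) = k(x, y) / sqrt(k(x, x) * k(y, y)):
    # after this, every diagonal entry of the kernel matrix is 1,
    # which puts the subkernels of a combined kernel on a common scale
    n = len(K)
    d = [math.sqrt(K[i][i]) for i in range(n)]
    return [[K[i][j] / (d[i] * d[j]) for j in range(n)] for i in range(n)]

K = [[4.0, 2.0], [2.0, 9.0]]
Kn = sqrtdiag_normalize(K)  # diagonal entries become 1.0
```

Normalizing each subkernel this way keeps one kernel (e.g. LINEAR with large feature values) from dominating JensenShannon in the combined kernel.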
blackburn | ?? | 22:01 |
blackburn | re | 22:01 |
blackburn | :D | 22:01 |
wiking | yo | 22:01 |
@sonney2k | wiking, but then it is fast right? I mean worst case precompute the matrix and done | 22:02 |
wiking | dooooh | 22:06 |
wiking | mkl finished.... worse results than with a simple svm | 22:07 |
wiking | significantly :( | 22:07 |
blackburn | I think it is totally wrong idea :D | 22:07 |
wiking | ahhahahahaha | 22:07 |
blackburn | I mean mkl (though I am not an expert right) | 22:07 |
wiking | ok i'm trying now the normalization of the sub kernels :) | 22:09 |
@sonney2k | wiking, ahh and don't forget to adjust C | 22:10 |
@sonney2k | anyways sleep time for me | 22:10 |
@sonney2k | cu | 22:10 |
wiking | sonney2k: where in mkl or svm? or both? :) | 22:10 |
@sonney2k | mkl C use 0 always | 22:11 |
@sonney2k | only svm | 22:11 |
wiking | ah ok maybe that was the problem :))) | 22:11 |
wiking | i've used 1.0 | 22:11 |
wiking | sonney2k gnite and thnx for the tips | 22:15 |
PhilTillet | gnight sonney2k | 22:16 |
n4nd0 | sonney2k: good night | 22:19 |
blackburn | oh I have to say this | 22:20 |
blackburn | sonney2k: good night!! | 22:20 |
PhilTillet | :p | 22:20 |
PhilTillet | blackburn, do I still have some fixes to do in my nearest centroid pull request? :s | 22:21 |
blackburn | ah | 22:21 |
PhilTillet | maybe adding shrinking? | 22:21 |
blackburn | what kind of shrinking? :) | 22:22 |
PhilTillet | I have seen on google that there was a variant called nearest shrunk centroids | 22:22 |
PhilTillet | it shrinks centroids towards zero | 22:22 |
PhilTillet | shrunken* :p | 22:23 |
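For reference, the variant PhilTillet mentions (nearest shrunken centroids, Tibshirani et al.'s PAM method) soft-thresholds each centroid's per-feature offset from the overall mean toward zero, which zeroes out uninformative features. An illustrative sketch of the soft-thresholding step (hypothetical helper, not part of the pull request):

```python
def soft_threshold(offsets, delta):
    # shrink each centroid offset toward zero by delta; offsets whose
    # magnitude is below delta vanish, so the corresponding features
    # no longer influence that class's centroid
    out = []
    for d in offsets:
        shrunk = abs(d) - delta
        out.append((1.0 if d >= 0 else -1.0) * max(shrunk, 0.0))
    return out
```

For example, with delta = 0.5 the offsets [2.5, -0.3, 0.8] shrink to [2.0, 0.0, 0.3]: the middle feature is dropped entirely.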
blackburn | oh I don't know | 22:24 |
blackburn | just fix things I noted | 22:24 |
PhilTillet | well I did it | 22:26 |
PhilTillet | didn't you see the new commit? :p | 22:26 |
blackburn | https://github.com/shogun-toolbox/shogun/pull/433 | 22:27 |
blackburn | can you? | 22:27 |
PhilTillet | oh, seems like i made a mistake in my commit name | 22:27 |
PhilTillet | i can see 69cca92 | 22:27 |
PhilTillet | * Fixed the example include and comments | 22:27 |
PhilTillet | but there are other stars in that commit with other things i fixed | 22:28 |
PhilTillet | :p | 22:28 |
blackburn | hmm right strange | 22:28 |
blackburn | why comments are after that | 22:28 |
PhilTillet | I don't know :o | 22:29 |
blackburn | okay more commits then | 22:29 |
PhilTillet | yes, actually i had many different commits, but rebased them into one | 22:29 |
PhilTillet | probably bad idea | 22:29 |
PhilTillet | that's probably why it appears before too | 22:31 |
PhilTillet | won't do that again :D | 22:31 |
blackburn | PhilTillet: okay commented some stuff | 22:34 |
PhilTillet | saw that, I don't understand the free thing, target is just a pointer, not allocated | 22:35 |
blackburn | actually you should rather use DotFeatures here I think | 22:37 |
PhilTillet | ah, you are talking about float64_t* current ? | 22:38 |
blackburn | no | 22:38 |
blackburn | okay actually it is ok for simple features | 22:38 |
blackburn | however there is a do_free stuff you know | 22:38 |
blackburn | it is supposed for such things | 22:38 |
blackburn | workflow is to get and call free with do_free | 22:39 |
blackburn | it sets do_free automagically and frees according to its value | 22:39 |
PhilTillet | okay got it | 22:46 |
PhilTillet | and concerning the other comment, 1.0/(float64_t)num_per_class[i], if I got it right I should replace with 1.0/((float64_t)num_per_class[i]-1) right? | 22:49 |
blackburn | yeah | 22:49 |
blackburn | but consider case with N=1 | 22:50 |
PhilTillet | yes | 22:50 |
PhilTillet | that's what i was going to point out | 22:50 |
PhilTillet | :D | 22:50 |
PhilTillet | understood | 22:50 |
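The fix under discussion — using Bessel's correction 1/(N-1) for the per-class variance, with a guard for the N = 1 case where the correction would divide by zero — in an illustrative Python sketch (not the actual C++ code):

```python
def class_variance(values):
    # unbiased sample variance with Bessel's correction 1/(N-1);
    # for N == 1 the correction divides by zero, so fall back to 0.0
    n = len(values)
    mean = sum(values) / n
    if n < 2:
        return 0.0
    return sum((v - mean) ** 2 for v in values) / (n - 1)
```

So `class_variance([1.0, 2.0, 3.0])` gives 1.0, and a class with a single example gets variance 0.0 instead of a division by zero.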
PhilTillet | I just check it still executes fine and I push | 22:55 |
blackburn | ok | 22:55 |
-!- harshit_ [~harshit@182.64.221.94] has joined #shogun | 23:03 | |
shogun-buildbot | build #463 of python_modular is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/python_modular/builds/463 | 23:04 |
harshit_ | blackburn: hi, got your mail! what do you mean by "What you would need is to make use of its semi-supervised capabilities there." | 23:04 |
harshit_ | does that mean that I need to create an example of it ? | 23:05 |
blackburn | harshit_: I mean we currently do not provide abilities to train semi-supervised there | 23:05 |
harshit_ | ohk | 23:05 |
blackburn | so you would need to create some class for that | 23:06 |
harshit_ | I need to go through SVMlin implementation in shogun, Before saying anything else ! | 23:06 |
harshit_ | blackburn: A wrapper kind of class ? | 23:07 |
blackburn | yes | 23:07 |
harshit_ | got it | 23:07 |
harshit_ | thanks,.. | 23:08 |
-!- genix [~gsomix@188.168.4.3] has joined #shogun | 23:09 | |
blackburn | n4nd0: ah btw! are you here? | 23:09 |
n4nd0 | blackburn: yeah, tell me | 23:09 |
-!- gsomix [~gsomix@188.168.14.52] has quit [Ping timeout: 260 seconds] | 23:09 | |
blackburn | n4nd0: heiko added subset of subset of subset of subset support | 23:09 |
n4nd0 | haha | 23:09 |
blackburn | so kernel multiclass stuff should work now | 23:09 |
n4nd0 | cool | 23:09 |
n4nd0 | I should try it | 23:10 |
shogun-buildbot | build #457 of ruby_modular is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/ruby_modular/builds/457 | 23:10 |
n4nd0 | blackburn: what are you working on lately? | 23:11 |
blackburn | I'd say nothing :D | 23:12 |
blackburn | university kills my time this week | 23:12 |
PhilTillet | blackburn, I updated my pull request :) | 23:12 |
n4nd0 | university stuff then | 23:12 |
blackburn | n4nd0: sad true | 23:13 |
blackburn | truth I mean lol | 23:13 |
CIA-64 | shogun: Sergey Lisitsyn master * r7b3b7bc / src/NEWS : Updated NEWS - http://git.io/nGKAMQ | 23:14 |
n4nd0 | blackburn: got you :D | 23:14 |
blackburn | n4nd0: damn heiko did not add add_subset thing to labels | 23:18 |
n4nd0 | blackburn: :O | 23:19 |
blackburn | only to features | 23:19 |
blackburn | not damn heiko :D | 23:19 |
blackburn | damn. heiko | 23:19 |
n4nd0 | lol | 23:20 |
blackburn | ah I'll finish that later | 23:22 |
n4nd0 | ok | 23:22 |
blackburn | damn it is getting warm here | 23:22 |
blackburn | was -5 a week before and +15 | 23:22 |
blackburn | already | 23:22 |
PhilTillet | hehe | 23:24 |
PhilTillet | Russia is not the warmest country ever :p | 23:24 |
n4nd0 | oh +15 | 23:26 |
n4nd0 | we don't have that temperature here yet | 23:26 |
PhilTillet | Spain is very warm in summer though :p | 23:27 |
PhilTillet | I used to go near Barcelona | 23:27 |
blackburn | almost no snow there left | 23:27 |
blackburn | PhilTillet: I guess he is in Sweden ;) | 23:28 |
PhilTillet | ooooh | 23:28 |
blackburn | okay see you tomorrow guys | 23:28 |
PhilTillet | see you tomorrow :) | 23:28 |
-!- blackburn [~qdrgsm@83.234.54.186] has quit [Ping timeout: 248 seconds] | 23:33 | |
n4nd0 | good night | 23:33 |
-!- PhilTillet [~Philippe@157.159.42.154] has quit [Quit: Leaving] | 23:52 | |
--- Log closed Tue Apr 10 00:00:19 2012 |
Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!