--- Log opened Sat Sep 10 00:00:12 2011 |
blackburn | broken in 0.9 | 00:10 |
@sonney2k | ugh | 00:20 |
@sonney2k | I really thought I had checked these | 00:21 |
blackburn | sonney2k: will check some more tags tomorrow | 00:23 |
blackburn | and btw I have to integrate superlu as soon as possible | 00:24 |
blackburn | our LLE is slower than the scikits-learn one | 00:24 |
blackburn | it is a blocker for my possible paper about our implementations | 00:24 |
@sonney2k | heh :) | 00:25 |
@sonney2k | go ahead | 00:25 |
@sonney2k | and the GNB needs fixing too | 00:25 |
@sonney2k | anyways | 00:25 |
@sonney2k | night | 00:25 |
blackburn | yeah | 00:25 |
blackburn | a lot of things to fix | 00:26 |
blackburn | I will have a hard time next week, but I hope it will go smoother later | 00:26 |
blackburn | there will be some development kick-off at my job | 00:26 |
blackburn | see you | 00:27 |
-!- blackburn [~blackburn@31.28.44.65] has quit [Quit: Leaving.] | 00:27 |
-!- serialhex [~quassel@99-101-148-183.lightspeed.wepbfl.sbcglobal.net] has joined #shogun | 02:05 |
-!- in3xes [~in3xes@180.149.49.227] has joined #shogun | 04:52 |
-!- blackburn [~blackburn@31.28.44.65] has joined #shogun | 09:49 |
-!- in3xes [~in3xes@180.149.49.227] has quit [Ping timeout: 258 seconds] | 10:19 |
CIA-3 | shogun: Sergey Lisitsyn master * r41f17ea / src/configure : Added SuperLU detection - http://git.io/PMelQA | 11:18 |
@sonney2k | blackburn, ok | 12:28 |
blackburn | sonney2k: ok for what? | 12:29 |
@sonney2k | no worries - just stay in the team | 12:29 |
blackburn | :) | 12:29 |
@sonney2k | what is the superlu stuff? | 12:29 |
blackburn | sonney2k: sparse direct solver | 12:29 |
blackburn | I'm currently integrating it to arpack | 12:29 |
blackburn | sonney2k: the problem is that LLE uses a sparse weight matrix | 12:52 |
blackburn | and arpack provides a reverse communication interface, making it possible to use a sparse solver | 12:52 |
blackburn | that's how the sklearn guys did it | 12:52 |
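The approach described above (ARPACK's reverse-communication loop answered by a sparse direct solve, which is also what sklearn does) can be sketched with scipy, whose `eigsh` drives ARPACK and, in shift-invert mode, factorizes the matrix once with SuperLU. This is an illustrative stand-in, not Shogun's code; the matrix and sizes are made up:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy stand-in for the LLE weight matrix W (illustrative only).
W = sp.random(100, 100, density=0.05, random_state=0, format="csc")

# M = W'W is the sparse symmetric matrix whose smallest eigenvectors
# give the embedding; a tiny diagonal shift keeps the factorization
# non-singular for this random example.
M = (W.T @ W + 1e-6 * sp.identity(100)).tocsc()

# With sigma=0, eigsh runs ARPACK in shift-invert mode: M is factorized
# once by a sparse direct solver (SuperLU inside scipy), and each
# operator request in the reverse-communication loop becomes a cheap
# triangular solve. which="LM" in shift-invert mode targets the
# eigenvalues of M closest to sigma, i.e. the smallest ones.
vals, vecs = eigsh(M, k=5, sigma=0.0, which="LM")
```

The point is that the eigensolver never needs M in dense form: it only ever asks for solves against M, which is exactly where a sparse direct solver like SuperLU plugs in.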
-!- mrsrikanth [~mrsrikant@59.92.77.64] has joined #shogun | 14:45 |
blackburn | sonney2k: http://dl.dropbox.com/u/10139213/shogun/logos.png | 14:52 |
blackburn | any? | 14:52 |
blackburn | sonney2k: http://cs9996.vk.com/u3917169/-7/z_c97fe19b.jpg | 14:54 |
CIA-3 | shogun: Sergey Lisitsyn master * rf34ce2c / src/configure : Fixed typo in configure - http://git.io/a79CNA | 15:31 |
-!- mrsrikanth [~mrsrikant@59.92.77.64] has quit [Quit: Leaving] | 16:39 |
blackburn | sonney2k: helloo | 19:03 |
@sonney2k | blackburn, yes? | 19:49 |
@sonney2k | blackburn, I like the 3rd one | 19:50 |
@sonney2k | who painted it? | 19:50 |
blackburn | sonney2k: me, who else can? | 19:50 |
@sonney2k | I guess we should have a vote on the mailinglist | 19:51 |
@sonney2k | or even a call for logos if someone thinks he can do better | 19:51 |
blackburn | sonney2k: do you know some good sparse matrix multiplication lib or so? | 19:52 |
@sonney2k | umfpack? | 19:55 |
blackburn | sonney2k: I need matrix-matrix dot product | 19:56 |
@sonney2k | what is a matrix-matrix dot product? | 19:57 |
blackburn | ehh? | 19:57 |
blackburn | I have sparse matrix W and have to compute W'W | 19:57 |
blackburn | I realized it is the only bottleneck | 19:57 |
@sonney2k | maybe http://www.cise.ufl.edu/research/sparse/ssmult/ | 19:58 |
blackburn | uh | 19:59 |
blackburn | I guess it's faster to write it myself | 20:00 |
@sonney2k | blackburn, maybe it is not that difficult | 20:00 |
@sonney2k | if you assume indices are sorted for sparse vectors | 20:00 |
@sonney2k | you could keep track of all indices in a row | 20:01 |
blackburn | sonney2k: I would write a very specialized version of this multiplication | 20:01 |
blackburn | well I did it already but wrong | 20:01 |
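The merge scheme sonney2k suggests above (assume each sparse vector's indices are sorted, then walk both index arrays in lockstep) could look like the following. This is a hedged Python sketch of the technique, not Shogun's actual implementation:

```python
def sparse_dot(idx_a, val_a, idx_b, val_b):
    """Dot product of two sparse vectors stored as sorted index/value
    arrays, using a two-pointer merge over the index arrays."""
    i = j = 0
    acc = 0.0
    while i < len(idx_a) and j < len(idx_b):
        if idx_a[i] == idx_b[j]:       # common non-zero index: accumulate
            acc += val_a[i] * val_b[j]
            i += 1
            j += 1
        elif idx_a[i] < idx_b[j]:      # advance whichever pointer lags
            i += 1
        else:
            j += 1
    return acc

# Overlap at indices 2 and 5: 2*4 + 3*6 = 26
result = sparse_dot([0, 2, 5], [1.0, 2.0, 3.0], [2, 3, 5], [4.0, 5.0, 6.0])
```

The merge visits each non-zero once, so the cost is linear in the number of non-zeros of the two vectors rather than in the full dimension.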
@sonney2k | yeah it is not too easy | 20:59 |
blackburn | sonney2k: did it with std::list, is it ok? | 21:00 |
blackburn | sonney2k: finally our LLE became faster than sklearn's one | 21:02 |
@sonney2k | blackburn, can't you use dynarray? | 21:02 |
blackburn | sonney2k: I guess I can | 21:02 |
blackburn | 8.37s shogun lle | 21:05 |
blackburn | 11.59s scikits learn lle | 21:05 |
blackburn | sonney2k: DynArray has pretty big granularity | 21:08 |
blackburn | I need list with constant time insertion | 21:10 |
@sonney2k | you can adjust granularity though | 21:11 |
blackburn | sonney2k: I have N (number of examples) lists with no a priori known sizes | 21:12 |
@sonney2k | isn't that a bit too much? | 21:13 |
@sonney2k | N lists?! | 21:13 |
blackburn | sonney2k: how can I store non-zero indexes any other way? | 21:14 |
@sonney2k | ahh, number of examples. misread that | 21:14 |
blackburn | sonney2k: so ok to use std::list? | 21:15 |
@sonney2k | still no - but what are you doing? | 21:16 |
@sonney2k | what is in the list? | 21:16 |
blackburn | sonney2k: indexes of non zero elements of columns | 21:16 |
@sonney2k | blackburn, but then you could use dynarray and set granularity to the number of non-zero elements in the row you multiply with (or subsets of it) | 21:18 |
blackburn | sonney2k: ok will try | 21:19 |
blackburn | sonney2k: done | 21:27 |
blackburn | a little slower | 21:27 |
blackburn | but without your hateful std haha | 21:28 |
@sonney2k | which granularity size did you use? | 21:28 |
blackburn | m_k*2 | 21:28 |
blackburn | well, twice the k parameter | 21:28 |
@sonney2k | m_k ? | 21:28 |
blackburn | typically there are <m_k non-zero elements | 21:29 |
blackburn | exactly m_k in a row | 21:29 |
@sonney2k | I mean you know the number of elements in a row | 21:29 |
@sonney2k | so that is m_k? | 21:29 |
@sonney2k | why m_k * 2? | 21:29 |
blackburn | number of non zero elements in row | 21:29 |
@sonney2k | I mean it can only become smaller | 21:29 |
blackburn | sonney2k: no, in column it can be larger | 21:30 |
@sonney2k | blackburn, yes but not in product | 21:30 |
blackburn | ehh? | 21:30 |
@sonney2k | then it is intersection of row / col indices | 21:30 |
blackburn | I do W'W | 21:31 |
@sonney2k | the number of nnz components!? | 21:31 |
blackburn | so a column is multiplied by a column | 21:31 |
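The scheme blackburn describes (store the non-zero row indexes of each column, then form each entry of W'W as a column-column dot product) can be sketched as follows. The dict-per-column layout and function name are illustrative stand-ins for the index lists discussed above, not Shogun's DynArray-based code:

```python
import numpy as np

def gram_from_columns(cols, n):
    """Compute G = W'W for an n x n sparse matrix W.
    cols[j] maps row index -> value for the non-zeros of column j."""
    G = np.zeros((n, n))
    for i in range(n):
        ci = cols[i]
        for j in range(i, n):
            # (W'W)[i, j] is the dot product of columns i and j;
            # only indexes present in both columns contribute.
            s = sum(v * cols[j].get(r, 0.0) for r, v in ci.items())
            G[i, j] = G[j, i] = s   # W'W is symmetric
    return G

# Check against a dense computation on a small example
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [4.0, 0.0, 5.0]])
cols = [{r: W[r, j] for r in range(3) if W[r, j] != 0.0} for j in range(3)]
G = gram_from_columns(cols, 3)
```

Exploiting symmetry computes only the upper triangle, and each entry touches only the non-zeros of the two columns involved, which is why this is so much cheaper than a dense product when W has ~m_k non-zeros per column.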
blackburn | using dynarray costs 0.4s :) | 21:32 |
blackburn | granularity doesn't affect any speed | 21:33 |
blackburn | I would parallelize that too | 21:35 |
@sonney2k | how is that possible then?! I mean if you used huge granularity dynarray must be faster than any list | 21:35 |
blackburn | no idea, may be wrong measurement | 21:36 |
-!- blackburn [~blackburn@31.28.44.65] has quit [Read error: No route to host] | 21:56 |
-!- blackburn [~blackburn@31.28.44.65] has joined #shogun | 21:56 |
CIA-3 | shogun: Sergey Lisitsyn master * rf8944e0 / (2 files): Beautified dimreduction examples - http://git.io/OmoPsw | 23:28 |
CIA-3 | shogun: Sergey Lisitsyn master * r613d9dc / (2 files): Improved performance of locally linear embedding - http://git.io/BjbdOQ | 23:28 |
CIA-3 | shogun: Sergey Lisitsyn master * r7e0438e / src/shogun/preprocessor/LocallyLinearEmbedding.cpp : Removed unnecessary includes - http://git.io/fr_OXw | 23:28 |
CIA-3 | shogun: Sergey Lisitsyn master * r5ef91ed / src/shogun/preprocessor/KernelLocallyLinearEmbedding.cpp : Updated KLLE - http://git.io/nwu-Yw | 23:28 |
--- Log closed Sun Sep 11 00:00:17 2011 |
Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!