--- Log opened Mon Jun 25 00:00:07 2012
n4nd0 | blackburn: after comparing the results with svm-struct, I am more confident that the algo is ok :) | 00:24 |
blackburn | cool | 00:24 |
blackburn | n4nd0: so coincides? | 00:45 |
n4nd0 | blackburn: the values of the accuracy are similar | 00:46 |
n4nd0 | it's not a very good test to rely on it probably | 00:46 |
blackburn | accuracies are bad to compare.. | 00:46 |
n4nd0 | but it feels better at least | 00:46 |
n4nd0 | yeah I know ... but to compare the w is not going to work out directly | 00:46 |
blackburn | why? | 00:47 |
n4nd0 | because it is not exactly the same problem | 00:47 |
n4nd0 | and I have to look a bit more on their code where the differences are | 00:47 |
n4nd0 | and what similarity to expect | 00:47 |
blackburn | heh | 00:48 |
n4nd0 | in short it is something like | 00:52 |
n4nd0 | I compare with svm-struct 1-slack, primal formulation | 00:52 |
n4nd0 | something like | 00:52 |
n4nd0 | min ||w||^2 + C/n * slack | 00:52 |
n4nd0 | we do something like | 00:52 |
n4nd0 | min ||w||^2 + C * sum_i slack_i | 00:53 |
n4nd0 | there's a theorem in the paper to say that the same w is solution of both problems when | 00:53 |
n4nd0 | slack = sum_i slack_i | 00:53 |
n4nd0 | which seems a bit obvious to tell the truth | 00:54 |
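Typeset, the two objectives from the exchange above look like this (constraints omitted; constant conventions differ between implementations, so the C's need rescaling when comparing — this follows the chat's statement, not the paper's exact normalization):

```latex
% 1-slack (svm-struct): a single slack variable shared by all examples
\min_{w,\;\xi \ge 0} \quad \|w\|^2 + \frac{C}{n}\,\xi

% n-slack (one slack variable per training example)
\min_{w,\;\xi_i \ge 0} \quad \|w\|^2 + C \sum_{i=1}^{n} \xi_i

% Per the theorem referenced in the chat, both share the same optimal w when
\xi = \sum_{i=1}^{n} \xi_i
```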
blackburn | should be different C only?? | 00:54 |
n4nd0 | C is not a problem, I can change it | 00:55 |
n4nd0 | the problem is that in their formulation | 00:55 |
n4nd0 | slack is just one variable in the optimization vector | 00:55 |
n4nd0 | in mine it is a group of variables | 00:55 |
n4nd0 | and probably sum_i slack_i (result of my problem) is not the same as slack (result of their problem) | 00:56 |
n4nd0 | do you see what I mean to say? | 00:56 |
blackburn | how can that be? | 00:56 |
blackburn | it looks like both are just sums of slacks | 00:56 |
blackburn | what is different? | 00:56 |
n4nd0 | the degrees of freedom you have to do the optimization I think | 00:57 |
blackburn | min ||w||^2 + C/n * slack | 00:58 |
blackburn | what is slack here? | 00:58 |
n4nd0 | from the optimization point of view, I think it is just another variable | 00:59 |
n4nd0 | what it represents, I am not sure | 00:59 |
n4nd0 | I have not looked into it yet | 00:59 |
blackburn | only one? | 01:00 |
n4nd0 | yeah, one slack instead of one for each example | 01:00 |
blackburn | sounds pretty crazy hehe | 01:01 |
n4nd0 | maybe :) | 01:01 |
blackburn | http://svmlight.joachims.org/svm_struct.html | 01:01 |
blackburn | this one? | 01:01 |
n4nd0 | it's called the 1-slack formulation | 01:03 |
n4nd0 | http://tfinley.net/research/joachims_etal_09a.pdf | 01:05 |
n4nd0 | page 9 | 01:05 |
blackburn | whoo | 01:06 |
blackburn | n4nd0: cool | 01:08 |
blackburn | n4nd0: do you know eigen library? | 01:08 |
n4nd0 | I have read something about it yes, why? | 01:09 |
blackburn | want to link it with shogun | 01:09 |
blackburn | eigen3 | 01:09 |
blackburn | this would make me able to drop off some crappy code like arpack wrapper | 01:10 |
-!- heiko [~heiko@host86-174-151-208.range86-174.btcentralplus.com] has joined #shogun | 01:38 | |
CIA-18 | shogun: Sergey Lisitsyn master * r0587779 / (11 files in 5 dirs): Merge pull request #597 from pluskid/multiclass (+24 more commits...) - http://git.io/eRBWFw | 01:49 |
-!- heiko [~heiko@host86-174-151-208.range86-174.btcentralplus.com] has quit [Ping timeout: 264 seconds] | 01:52 | |
shogun-buildbot | build #1100 of libshogun is complete: Failure [failed configure] Build details are at http://www.shogun-toolbox.org/buildbot/builders/libshogun/builds/1100 blamelist: Chiyuan Zhang <pluskid@gmail.com> | 01:53 |
n4nd0 | good night guys | 01:53 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 01:53 | |
shogun-buildbot | build #1101 of libshogun is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/libshogun/builds/1101 | 01:57 |
CIA-18 | shogun: Sergey Lisitsyn master * r1381692 / (10 files in 4 dirs): Added Multitask Logistic Regression - http://git.io/DSJA6g | 02:06 |
CIA-18 | shogun: Sergey Lisitsyn master * r43fe9aa / (10 files in 4 dirs): Merge branch 'slep' of git://github.com/lisitsyn/shogun - http://git.io/Qy5Q7Q | 02:06 |
-!- blackburn [~blackburn@31.28.43.76] has quit [Ping timeout: 252 seconds] | 02:26 | |
-!- wiking [~wiking@78-23-189-112.access.telenet.be] has joined #shogun | 02:51 | |
-!- wiking [~wiking@78-23-189-112.access.telenet.be] has quit [Changing host] | 02:51 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 02:51 | |
-!- K0stIa [~kostia@alt2.hk.cvut.cz] has joined #shogun | 04:08 | |
-!- K0stIa [~kostia@alt2.hk.cvut.cz] has left #shogun [] | 04:08 | |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 08:12 | |
-!- wiking [~wiking@b252h112.ugent.be] has joined #shogun | 08:46 | |
-!- wiking [~wiking@b252h112.ugent.be] has quit [Changing host] | 08:46 | |
-!- wiking [~wiking@huwico/staff/wiking] has joined #shogun | 08:46 | |
-!- uricamic [~uricamic@2001:718:2:1634:2806:9057:c143:ab58] has joined #shogun | 08:49 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 10:04 | |
-!- heiko [~heiko@host86-174-151-208.range86-174.btcentralplus.com] has joined #shogun | 10:22 | |
-!- alexlovesdata [c25faea9@gateway/web/freenode/ip.194.95.174.169] has joined #shogun | 11:04 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 244 seconds] | 11:31 | |
@sonney2k | heiko, not to mix up stuff again: tomorrow we meet at 11 UTC, right? | 11:47 |
heiko | sonney2k, ehm :) | 11:48 |
heiko | date correct, let me check the time again :) | 11:48 |
@sonney2k | heh | 11:48 |
heiko | jep | 11:48 |
@sonney2k | mail sent | 11:49 |
heiko | nice | 11:50 |
@sonney2k | wiking, any news on your PR? | 11:50 |
wiking | meet? | 11:50 |
wiking | sonney2k: there's a meeting tomorrow at 11 UTC? | 11:50 |
@sonney2k | yes | 12:05 |
-!- contact2 [9320543b@gateway/web/freenode/ip.147.32.84.59] has joined #shogun | 12:13 | |
-!- blackburn [~blackburn@31.28.43.76] has joined #shogun | 12:31 | |
blackburn | tomorrow? | 12:33 |
blackburn | 25 is today? | 12:33 |
blackburn | wtf | 12:33 |
blackburn | :D | 12:33 |
-!- pluskid [~pluskid@111.120.22.158] has joined #shogun | 12:42 | |
@sonney2k | blackburn: argh please send a correction | 12:48 |
@sonney2k | thx | 12:57 |
pluskid | sonney2k: what are the steps to export a template class to python_modular? | 12:57 |
@sonney2k | pluskid well include the .h files and then use %template | 12:59 |
pluskid | sonney2k: any example? | 12:59 |
pluskid | ah, maybe I can look at CDenseFeatures | 12:59 |
@sonney2k | yes | 13:00 |
@sonney2k | I am typing on my mobile | 13:00 |
pluskid | sonney2k: thanks! | 13:00 |
@sonney2k | so it's a bit hard to write much | 13:00 |
pluskid | yeah | 13:01 |
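The steps sonney2k describes, sketched as a hypothetical SWIG interface fragment (class and %template names here are illustrative, not shogun's actual ones — CDenseFeatures in the modular interface files is the real example to copy from):

```swig
/* Hypothetical sketch: exporting a template class to python_modular.
 * 1) make the header visible to the generated wrapper code,
 * 2) let SWIG parse it, 3) instantiate per concrete type with %template. */
%{
#include <shogun/features/DenseFeatures.h>
%}
%include <shogun/features/DenseFeatures.h>

namespace shogun
{
    /* each %template line produces one concrete Python class */
    %template(RealDenseFeatures) CDenseFeatures<float64_t>;
    %template(LongDenseFeatures) CDenseFeatures<int64_t>;
}
```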
pluskid | just received the correction email of refined meeting date | 13:02 |
CIA-18 | shogun: Sergey Lisitsyn master * rf765117 / (4 files in 2 dirs): A bunch of fixes for tree part of SLEP - http://git.io/_bI8nA | 13:02 |
CIA-18 | shogun: Sergey Lisitsyn master * rcf577de / src/shogun/kernel/Kernel.cpp : Removed warning in kernel - http://git.io/z4mXfQ | 13:02 |
CIA-18 | shogun: Sergey Lisitsyn master * ra45e370 / src/shogun/transfer/multitask/Task.cpp : Rearranged initialization in CTask - http://git.io/g1fZLw | 13:07 |
-!- pluskid [~pluskid@111.120.22.158] has quit [Quit: Leaving] | 13:12 | |
CIA-18 | shogun: Sergey Lisitsyn master * rd3db2a6 / src/shogun/classifier/svm/OnlineLibLinear.cpp : Removed warning in OnlineLibLinear - http://git.io/J_8WKA | 13:13 |
-!- wiking [~wiking@huwico/staff/wiking] has quit [Quit: wiking] | 13:15 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 13:22 | |
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has joined #shogun | 13:23 | |
-!- heiko [~heiko@host86-174-151-208.range86-174.btcentralplus.com] has quit [Quit: Leaving.] | 13:41 | |
-!- pluskid [~pluskid@1.204.127.18] has joined #shogun | 13:54 | |
pluskid | Hi all | 14:10 |
pluskid | has anyone got compile error with the latest code? | 14:10 |
pluskid | http://pastebin.com/n4BV9S0T | 14:10 |
blackburn | segmentation fault?? | 14:10 |
pluskid | yeah | 14:11 |
blackburn | no I've never ever seen that | 14:11 |
pluskid | update again and try re-compiling | 14:11 |
pluskid | I also didn't see that before today | 14:11 |
-!- pluskid [~pluskid@1.204.127.18] has quit [Ping timeout: 255 seconds] | 14:21 | |
-!- pluskid [~pluskid@202.130.113.141] has joined #shogun | 14:22 | |
n4nd0 | pluskid: hi, is the error fixed after re-compilation? | 14:30 |
pluskid | n4nd0: no, any idea of this? | 14:30 |
n4nd0 | pluskid: it can be because a couple of parts in Machine.i need to have an ifdef there | 14:31 |
pluskid | ifdef what? | 14:32 |
n4nd0 | pluskid: USE_MOSEK | 14:32 |
pluskid | hmm... | 14:32 |
n4nd0 | pluskid: check it in Structure.i | 14:32 |
n4nd0 | #ifdef USE_MOSEK ... #endif | 14:32 |
n4nd0 | pluskid: can you please try if it fixes it? | 14:32 |
pluskid | ok | 14:33 |
n4nd0 | my guess is that it should go around APPLY_STRUCTURED(CPrimalMosekSOSVM); and %rename(apply_generic) CPrimalMosekSOSVM::apply(CFeatures* data=NULL); | 14:34 |
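A sketch of the guard n4nd0 is suggesting for Machine.i, using the #ifdef pattern from Structure.i. The two wrapped directives are the ones quoted in the line above; whether they are the only offending lines is a guess:

```swig
/* Only expose the Mosek-backed SO-SVM when shogun was configured with Mosek;
 * otherwise SWIG emits wrapper code for a class that was never compiled. */
#ifdef USE_MOSEK
APPLY_STRUCTURED(CPrimalMosekSOSVM);
%rename(apply_generic) CPrimalMosekSOSVM::apply(CFeatures* data=NULL);
#endif // USE_MOSEK
```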
pluskid | still segfault | 14:35 |
pluskid | trying to clean and re-make | 14:35 |
n4nd0 | ok | 14:35 |
n4nd0 | I'd say that make clean and make should make it work | 14:36 |
n4nd0 | since that code compiled in the buildbot | 14:36 |
n4nd0 | haha make should make it work lol | 14:36 |
pluskid | :D | 14:36 |
pluskid | :( | 14:38 |
pluskid | still segfault | 14:38 |
pluskid | I'll be right back soon | 14:38 |
n4nd0 | ok | 14:39 |
n4nd0 | we can check later if it's a problem related to swig's version | 14:39 |
n4nd0 | blackburn: your Machine.i has also the lines ^ with CPrimalMosekSOSVM right? | 14:41 |
blackburn | n4nd0: yes | 14:41 |
n4nd0 | blackburn: and no problems compiling? | 14:41 |
blackburn | no segfaults for sure | 14:41 |
n4nd0 | blackburn: ok | 14:41 |
blackburn | do you have segfaults too? | 14:41 |
n4nd0 | no no | 14:41 |
n4nd0 | but I have my configure prepared to use those files | 14:41 |
n4nd0 | PrimalMosekSOSVM.h and so | 14:42 |
n4nd0 | but probably you and pluskid don't have it | 14:42 |
blackburn | yeah we have no mosek | 14:44 |
-!- heiko [d4550102@gateway/web/freenode/ip.212.85.1.2] has joined #shogun | 15:00 | |
heiko | test. | 15:06 |
blackburn | .tset | 15:06 |
heiko | yeh :) works | 15:07 |
heiko | I am in a local library | 15:07 |
blackburn | ай эм ин э локал либрари | 15:07 |
heiko | cannot stay in my room any longer ;) | 15:07 |
blackburn | каннот стай ин май рум ани лонгер | 15:07 |
heiko | unfortunately, only http proxy so no git straightforward | 15:07 |
blackburn | heh | 15:08 |
n4nd0 | blackburn: are you teaching us some Russian? :) | 15:09 |
blackburn | n4nd0: I just wrote what heiko wrote in transliteration :D | 15:10 |
n4nd0 | haha | 15:10 |
heiko | ;) | 15:10 |
n4nd0 | that would be cool | 15:10 |
n4nd0 | if Russian is just English with other characters :P | 15:10 |
blackburn | йес ит кан би | 15:10 |
blackburn | иф руссиан ис джаст инглиш виз озер карактерс | 15:11 |
blackburn | n4nd0: btw about spain - it was the most boring match ever | 15:12 |
blackburn | shame on you :D | 15:13 |
n4nd0 | blackburn: :O | 15:13 |
n4nd0 | shame on France | 15:13 |
n4nd0 | Spain did its game | 15:13 |
blackburn | spa - por should be interesting | 15:13 |
blackburn | n4nd0: have you seen eng - ita yesterday? | 15:14 |
pluskid | n4nd0: do I have to re-run ./configure script? | 15:16 |
n4nd0 | blackburn: yeah | 15:16 |
n4nd0 | blackburn: I was going to ask you about that :) Pirlo is a brave guy | 15:16 |
n4nd0 | pluskid: I don't think so | 15:16 |
pluskid | hmm... | 15:17 |
blackburn | n4nd0: really great match | 15:17 |
n4nd0 | blackburn: ?? | 15:17 |
blackburn | ?? | 15:17 |
blackburn | :) | 15:17 |
n4nd0 | blackburn: Italy against a wall | 15:17 |
pluskid | I'm using swig 2.0.7-1, what's your version? | 15:17 |
n4nd0 | blackburn: they should have made a new feature for the match statistics, shots at the defenders... | 15:18 |
blackburn | n4nd0: well but still interesting | 15:18 |
blackburn | a lot of shots | 15:18 |
n4nd0 | pluskid: SWIG Version 2.0.4 | 15:18 |
n4nd0 | pluskid: I don't know if that may be a problem | 15:18 |
pluskid | hmm | 15:18 |
n4nd0 | blackburn: what is your swig version? | 15:18 |
n4nd0 | blackburn: do you know buildbot's one too? | 15:19 |
blackburn | mine is 2.0.4 | 15:19 |
blackburn | builbot has something similar I think | 15:19 |
n4nd0 | blackburn: I think that the match was ok, I had more fun because I was at home with my Italian friend :D | 15:19 |
blackburn | pluskid: where did you get 2.0.7 :D | 15:19 |
blackburn | ah you have arch linux right | 15:19 |
n4nd0 | mmm I don't really know if the problem can be caused because of swig's version | 15:20 |
pluskid | yeah | 15:20 |
pluskid | I failed to compile after git rebase to the latest code | 15:20 |
pluskid | maybe I can try to find what's incompatible here by binary search in the git history | 15:20 |
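The binary search pluskid describes is exactly what `git bisect` automates. A self-contained sketch on a throwaway repository (in shogun the test command would be the configure/make/import cycle rather than checking a marker file; the history below is invented for the demo):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name you

# Build a 5-commit history where commit 3 introduces the "bug" (a marker file).
for i in 1 2 3 4 5; do
  [ "$i" -ge 3 ] && touch BUG
  echo "$i" > file
  git add -A && git commit -qm "commit $i"
done

git bisect start HEAD HEAD~4          # bad = HEAD, good = 4 commits earlier
# git bisect run re-tests at each checkout: exit 0 marks the revision good,
# non-zero marks it bad, and the range is halved each step.
git bisect run sh -c '! test -f BUG' | tee bisect.out
git bisect reset >/dev/null
grep 'commit 3$' bisect.out           # the first bad commit's message
```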
n4nd0 | after doing make clean | 15:20 |
n4nd0 | ? | 15:20 |
pluskid | yeah, I've done make clean | 15:21 |
pluskid | maybe also try with clang-swig | 15:21 |
pluskid | -.-bb | 15:21 |
pluskid | ah, no such thing "clang-swig" | 15:21 |
n4nd0 | what I don't understand is that the problem seems to be caused by the lines in Machine.i that use CPrimalMosekSOSVM | 15:21 |
n4nd0 | but if one does ifdef USE_MOSEK they shouldn't be executed | 15:22 |
pluskid | but I've wrapped with ifdef as you suggested | 15:22 |
n4nd0 | yeah, that's why | 15:22 |
n4nd0 | with ifdef, they shouldn't be executed by swig | 15:22 |
n4nd0 | because you are not compiling with such a USE_MOSEK, are you? | 15:23 |
pluskid | no | 15:23 |
n4nd0 | mmm ok | 15:23 |
n4nd0 | so if ifdef didn't solve it, I'd remove it | 15:23 |
pluskid | seems they are not being executed | 15:23 |
pluskid | because I no longer get the warning %extend defined for an undeclared class shogun::CPrimalMosekSOSVM. | 15:24 |
pluskid | but still segfault | 15:24 |
n4nd0 | did you do make clean after adding them? | 15:24 |
n4nd0 | aaah | 15:24 |
pluskid | I think I did | 15:24 |
pluskid | but I can try again | 15:24 |
n4nd0 | that the extend thing is just a warning | 15:24 |
n4nd0 | I thought it was the direct cause of the segfault | 15:24 |
n4nd0 | then probably even blackburn and the buildbot get that warning | 15:24 |
blackburn | yes I get it | 15:25 |
pluskid | anyway, no error message, just segfault seems weird | 15:25 |
blackburn | debug it with valgrind :D | 15:25 |
pluskid | my gcc isn't compiled with debug information | 15:26 |
pluskid | neither is swig :p | 15:26 |
n4nd0 | pluskid: give me 5 min and I will rebase the latest code and compile | 15:26 |
pluskid | n4nd0: thanks! | 15:26 |
n4nd0 | pluskid: maybe I join you in the segfault club | 15:26 |
pluskid | haha, welcome! | 15:26 |
blackburn | first rule of segfault club | 15:28 |
n4nd0 | pluskid: for what interfaces is your shogun configured? | 15:33 |
pluskid | n4nd0: only python_modular | 15:33 |
n4nd0 | pluskid: no segfault here | 15:39 |
pluskid | n4nd0: :-/ , then all I can do is binary search now | 15:39 |
n4nd0 | pluskid: is your swig version stable? | 15:40 |
pluskid | n4nd0: not sure, Arch has some bleeding edge packages... | 15:40 |
n4nd0 | aham | 15:40 |
pluskid | I have a habit of doing a system update every day | 15:42 |
pluskid | first I'll revert to an old version where I remember I can compile correctly to see if there's something wrong with my swig | 15:43 |
pluskid | compiled after checking out back to a very old version... | 15:52 |
pluskid | Jun. 09 | 15:52 |
n4nd0 | oh, that's old | 15:54 |
n4nd0 | but it's good that you can compile now | 15:54 |
pluskid | trying some newer version | 15:57 |
pluskid | compiles on Jun. 23 | 16:01 |
pluskid | Merge pull request #599 from cwidmer/master | 16:01 |
pluskid | failed on Jun. 23 | 16:04 |
pluskid | Merge pull request #596 from iglesias/so | 16:04 |
pluskid | n4nd0: seems something in the so is incompatible in my local box | 16:05 |
pluskid | luckily there are only 7 commits between | 16:05 |
pluskid | I'm afraid I have to try them later | 16:06 |
pluskid | currently the thunder is toooooooooooo threatening here... | 16:06 |
pluskid | not sure it will do harm to my computer | 16:06 |
pluskid | turn off my laptop for safety | 16:06 |
pluskid | :D | 16:06 |
-!- pluskid [~pluskid@202.130.113.141] has quit [Quit: Leaving] | 16:06 | |
n4nd0 | pluskid: it must be something related to swig | 16:07 |
n4nd0 | at least that is my guess | 16:07 |
n4nd0 | probably in the commit + so for modular interfaces + python example | 16:07 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Quit: leaving] | 16:12 | |
-!- blackburn [~blackburn@31.28.43.76] has quit [Ping timeout: 245 seconds] | 16:12 | |
-!- contact2 [9320543b@gateway/web/freenode/ip.147.32.84.59] has quit [Quit: Page closed] | 17:07 | |
-!- uricamic [~uricamic@2001:718:2:1634:2806:9057:c143:ab58] has quit [Quit: Leaving.] | 17:25 | |
-!- heiko [d4550102@gateway/web/freenode/ip.212.85.1.2] has quit [Quit: Page closed] | 17:29 | |
-!- blackburn [b22d393f@gateway/web/freenode/ip.178.45.57.63] has joined #shogun | 17:46 | |
-!- puffin444 [62e3926e@gateway/web/freenode/ip.98.227.146.110] has joined #shogun | 17:47 | |
-!- gsomix [~gsomix@95.67.169.72] has joined #shogun | 18:11 | |
gsomix | exams are over! | 18:12 |
gsomix | hi all | 18:12 |
puffin444 | That's wonderful gsomix! | 18:15 |
-!- blackburn [b22d393f@gateway/web/freenode/ip.178.45.57.63] has quit [Ping timeout: 245 seconds] | 18:17 | |
-!- blackburn [~blackburn@31.28.43.76] has joined #shogun | 18:51 | |
-!- heiko [~heiko@host86-180-159-168.range86-180.btcentralplus.com] has joined #shogun | 18:55 | |
@sonney2k | blackburn, what was pluskid's compile segfault about? | 18:57 |
@sonney2k | blackburn, did gcc segfault? | 18:57 |
blackburn | sonney2k: no idea he didn't solve that yet | 18:57 |
blackburn | sonney2k: yeap | 18:57 |
@sonney2k | blackburn, then his memory is RIP | 18:57 |
blackburn | sonney2k: wait but with earlier versions it was ok | 18:58 |
@sonney2k | guess he needs to buy new ram | 18:58 |
blackburn | strange | 18:58 |
@sonney2k | blackburn, older version of gcc? | 18:58 |
blackburn | older git | 18:58 |
blackburn | revision | 18:59 |
blackburn | hahah | 18:59 |
blackburn | forgot the word | 18:59 |
@sonney2k | blackburn, then I still vote for RAM | 18:59 |
blackburn | is it from your experience? | 18:59 |
@sonney2k | yes | 18:59 |
blackburn | ok we'll see | 19:01 |
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has quit [Ping timeout: 265 seconds] | 19:04 | |
CIA-18 | shogun: Sergey Lisitsyn master * r9264c84 / (2 files in 2 dirs): Updated multitask logistic regression - http://git.io/3PjWjQ | 19:05 |
CIA-18 | shogun: Sergey Lisitsyn master * r6e86d28 / (2 files in 2 dirs): Merge branch 'slep' of git://github.com/lisitsyn/shogun - http://git.io/fkHZgg | 19:05 |
blackburn | sonney2k: we all forgot about weekly reports | 19:09 |
blackburn | :D | 19:09 |
blackburn | whoops something is wrong with my gitconfig | 19:10 |
@sonney2k | blackburn, well then hurry up...! | 19:10 |
CIA-18 | shogun: Sergey Lisitsyn master * r845c6c5 / src/shogun/transfer/multitask/Task.h : Added dummy task doc - http://git.io/8KgrvQ | 19:12 |
blackburn | argh | 19:13 |
CIA-18 | shogun: Sergey Lisitsyn master * r19a6e35 / src/shogun/lib/SGSparseMatrix.h : Removed useless nulling - http://git.io/p2HBFQ | 19:18 |
blackburn | hooray | 19:18 |
-!- heiko [~heiko@host86-180-159-168.range86-180.btcentralplus.com] has quit [Ping timeout: 272 seconds] | 19:42 | |
-!- heiko1 [~heiko@host86-183-74-41.range86-183.btcentralplus.com] has joined #shogun | 19:42 | |
puffin444 | heiko1, is that you or an evil clone of heiko? | 19:43 |
-!- heiko2 [~heiko@host86-180-159-222.range86-180.btcentralplus.com] has joined #shogun | 19:46 | |
heiko2 | puffin444, how are things? | 19:46 |
blackburn | triple heiko | 19:47 |
puffin444 | things are going quite well. I am very close to making a pull request for the model selection stuff | 19:47 |
heiko2 | nice | 19:47 |
-!- heiko1 [~heiko@host86-183-74-41.range86-183.btcentralplus.com] has quit [Ping timeout: 272 seconds] | 19:47 | |
heiko2 | the thing you stumbled over took me some hours already, hope to get that fixed soon | 19:48 |
heiko2 | there were two problems, one easy one (already fixed, have to submit patch though) and one harder one (concept problem) | 19:48 |
heiko2 | Are you currently using the get_combinations method? | 19:49 |
puffin444 | no | 19:49 |
puffin444 | I am using a new method I added, get_random_combination | 19:50 |
heiko2 | ok, then you should have no problems with that | 19:50 |
puffin444 | It just returns one random combination | 19:50 |
puffin444 | Yeah it all works | 19:50 |
heiko2 | why do you need that? | 19:50 |
puffin444 | I use a random combination to start the gradient search | 19:51 |
heiko2 | ah ok | 19:51 |
heiko2 | and that might be replaced by a grid-search later right? | 19:51 |
puffin444 | Yes. It might be combined somehow with grid search | 19:51 |
heiko2 | and how is the class structure for the modsel stuff now? | 19:53 |
puffin444 | There is now a superclass called MachineEvaluation. CrossValidation and GradientEvaluation inherit from this class | 19:54 |
heiko2 | so no changes from the stuff we agreed on last time we talked? | 19:54 |
puffin444 | Nope. | 19:54 |
puffin444 | The gradient selection model as it stands can be used with any machine as long as it has a corresponding differentiable function. | 19:55 |
heiko2 | Thats very cool | 19:55 |
heiko2 | I think this will make things so much easier in the future | 19:56 |
puffin444 | I think so. I was surprised actually by the small amount of work required to get hyperparameter learning up and running. This was the part of the project that worried me the most. | 19:56 |
heiko2 | The horrors are always in the details :) | 19:57 |
puffin444 | Yes :) | 19:57 |
heiko2 | Say, do you plan to do this automatic relevance determination stuff? | 19:58 |
puffin444 | Not sure what you're talking about actually. | 19:59 |
heiko2 | if your data has multiple dimensions, one could optimise sigma in every dimension separately | 19:59 |
heiko2 | rather than using the same value for each | 20:00 |
puffin444 | Oh. I'm not sure about that. I'll see what Oliver thinks. The way it's structured now though it wouldn't be too hard to add. | 20:00 |
heiko2 | let me find the page in the book where I saw that | 20:01 |
heiko2 | I like it because it has this nice feature-selection interpretation (dimensions that do not help are wiped out) | 20:01 |
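The automatic relevance determination idea heiko is sketching: give the squared-exponential kernel one lengthscale per input dimension, so uninformative dimensions get a large lengthscale and drop out (standard GP textbook notation, not shogun code):

```latex
% Isotropic squared-exponential kernel: one shared lengthscale \ell
k(x, x') = \sigma_f^2 \exp\!\left( -\frac{\|x - x'\|^2}{2\ell^2} \right)

% ARD variant: a separate lengthscale \ell_d per dimension; as \ell_d \to \infty,
% dimension d stops influencing the kernel (the feature-selection reading)
k(x, x') = \sigma_f^2 \exp\!\left( -\frac{1}{2} \sum_{d=1}^{D}
           \frac{(x_d - x'_d)^2}{\ell_d^2} \right)
```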
puffin444 | I think it would just be a matter of adding/modifiying an inference method class and a likelihood model class that calculates a different gradient. | 20:02 |
puffin444 | Nothing in the model selection framework would have to be changed :) | 20:02 |
heiko2 | well basically in the model-selection chapter | 20:02 |
heiko2 | probably yes | 20:03 |
heiko2 | I mean yes youre right | 20:03 |
puffin444 | By the way the NLOPT and the optimizer in the GPML are giving me answers which are exactly the same | 20:05 |
heiko2 | yes I saw that, awesome :D | 20:06 |
puffin444 | Very reassuring :) | 20:06 |
heiko2 | indeed | 20:06 |
heiko2 | what problem did you try? | 20:06 |
-!- puffin444_ [62e3926e@gateway/web/freenode/ip.98.227.146.110] has joined #shogun | 20:07 | |
puffin444_ | sorry about that | 20:08 |
puffin444_ | It was sort of a simple case (with some variations in the training vectors) | 20:08 |
puffin444_ | Even with radical changes in individual training vectors I consistently got the same answers | 20:08 |
heiko2 | thats great then | 20:08 |
heiko2 | perhaps it would be good to test it on different kinds of problems (easy/hard, different scales etc) | 20:09 |
heiko2 | you could also assert for correct results in order to detect errors when the framework is changed (these sometimes aren't found immediately) | 20:10 |
-!- puffin444 [62e3926e@gateway/web/freenode/ip.98.227.146.110] has quit [Ping timeout: 245 seconds] | 20:10 | |
puffin444_ | what | 20:10 |
heiko2 | ? | 20:10 |
puffin444_ | I guess the irc bot reports these things late | 20:10 |
puffin444_ | hah | 20:10 |
heiko2 | oh, yes :) what did you read? | 20:11 |
puffin444_ | == puffin444 [62e3926e@gateway/web/freenode/ip.98.227.146.110] has quit [Ping timeout: 245 seconds] | 20:11 |
puffin444_ | as I was logged in :) | 20:11 |
heiko2 | (19:08:53) heiko: thats great then | 20:11 |
heiko2 | (19:09:16) heiko: perhaps it would be good to test it on different kinds of problems (easy/hard, different scales etc) | 20:11 |
heiko2 | (19:10:09) heiko: you could also assert for correct results in order to detect errors when the framework is changed (these sometimes aren't found immediately) | 20:11 |
puffin444_ | Yes I do want to test both the GP and the gradient search on a variety of data sets | 20:12 |
heiko2 | ok then, I will have a look at the modsel code once you commit it | 20:13 |
puffin444_ | Yeah there are some details I do have questions about. | 20:14 |
-!- alexlovesdata [c25faea9@gateway/web/freenode/ip.194.95.174.169] has quit [Ping timeout: 245 seconds] | 20:14 | |
heiko2 | Now? | 20:14 |
puffin444_ | For example currently the generalized framework makes an evaluation and returns a general result which must be downcast. | 20:15 |
puffin444_ | Just a few small things. | 20:15 |
puffin444_ | about coding styles, decisions, etc. | 20:15 |
heiko2 | we usually solve these problem with enums, like in CKernel | 20:16 |
heiko2 | but ok | 20:16 |
heiko2 | let me know when you submit, I am really interested in the gradient stuff | 20:16 |
puffin444_ | Then it might be beneficial for an enum type system to be added for Evaluation Results. I'll ask a few of these questions with the pull request. | 20:17 |
puffin444_ | I definitely will. I just want to clean up a little of the code and check for memory bugs and then I will submit. | 20:17 |
heiko2 | alright | 20:17 |
puffin444_ | Sorry about some of the coding conventions. I'll make sure to clean that up. | 20:18 |
heiko2 | No worries :) | 20:19 |
puffin444_ | Oliver was going to join us, but I just got word he's stuck on a late train. | 20:20 |
heiko2 | ah ok | 20:20 |
heiko2 | send him my greetings, I share the pain with the deutsche bahn :) | 20:21 |
puffin444_ | Hah I'm sure it's better than Amtrak | 20:22 |
heiko2 | hehe, train companies seem to be horrible everywhere ;) | 20:22 |
heiko2 | alright then, I have to leave soon, have a good evening | 20:24 |
heiko2 | ehm, is it evening for you? | 20:24 |
heiko2 | :) | 20:24 |
heiko2 | dont know | 20:24 |
puffin444_ | It's early afternoon :). | 20:25 |
puffin444_ | Have a good evening! | 20:25 |
heiko2 | thanks :) take care, bye | 20:26 |
-!- os252 [55b3f5d9@gateway/web/freenode/ip.85.179.245.217] has joined #shogun | 20:32 | |
os252 | puffin444: sorry; now online. | 20:32 |
puffin444_ | Hi oliver | 20:32 |
os252 | hi | 20:32 |
os252 | did you guys discuss things already? | 20:33 |
os252 | Sorry; I missed a connection and then had to wait for some time. | 20:33 |
blackburn | puffin444_: gsomix: guys we totally forgot about weekly reports - could you please? | 20:33 |
puffin444_ | We talked about the model selection framework. | 20:33 |
-!- heiko2 [~heiko@host86-180-159-222.range86-180.btcentralplus.com] has quit [Ping timeout: 272 seconds] | 20:33 | |
puffin444_ | blackburn, will do. | 20:33 |
puffin444_ | We talked about the model selection framework. | 20:34 |
puffin444_ | It's done now. I just want to clean up a little bit of the code and look for memory leaks before I submit a pull request. | 20:34 |
puffin444_ | Hyperparameter learning is now an option for GPs, and the answer from NLOPT corresponds almost exactly to that of GPML. | 20:35 |
os252 | great. | 20:36 |
os252 | answer = marginal likelihood and parameter settings? | 20:36 |
puffin444_ | Yes. I get the exact same answer for marginal likelihood. Parameter settings are the same for the most part. There is some variability with sigma but the sigma tends to be very small (1e-7 range). | 20:37 |
os252 | ok, that's most likely numerical.. nice. | 20:38 |
os252 | How did you end up handling the scale of the kernel? | 20:38 |
puffin444_ | I have not tackled that problem yet. | 20:39 |
os252 | so the scale is =1 ? | 20:39 |
puffin444_ | Yes as of now. | 20:39 |
os252 | wondering how you can then get the same results as gpml which learns the scale. | 20:39 |
os252 | or did you simulate from that kernel? | 20:40 |
puffin444_ | I am using the kernel which does not have a scale in GPML | 20:40 |
os252 | I see | 20:40 |
os252 | ok | 20:40 |
os252 | The scale is quite crucial. | 20:40 |
os252 | One can either introduce scaled kernels in shogun or need to rescale them in the gp classes. | 20:41 |
os252 | Not sure what is better. | 20:41 |
puffin444_ | I see. So can the scale be independent of kernel type? Within the GP classes? | 20:41 |
os252 | Introducing scaling in shogun kernels would seem cleanest but may break quite a few things. | 20:41 |
os252 | it could | 20:41 |
os252 | just like the noise is added to the diagonal you can rescale the kernel matrix. | 20:41 |
os252 | relative scaling if you have a sum of kernels is done by the combinator class. | 20:42 |
os252 | It's probably the simplest solution given the existing shogun code base | 20:42 |
puffin444_ | It might just be safer at this point to put it in the GP code. It should be very easy to do. | 20:43 |
os252 | yes | 20:43 |
os252 | sounds good. | 20:43 |
os252 | also: those trivial parameters I'd all put on log scale. | 20:43 |
os252 | If you have the choice (as for this one) | 20:43 |
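os252's two points, written out: the scale can be applied to the kernel matrix just like the noise term on the diagonal, and positive parameters are optimised on log scale so gradient steps can never leave the legal range (notation assumed, not taken from shogun's code):

```latex
% Training covariance with an explicit scale \sigma_f^2 and noise \sigma_n^2:
\tilde{K} = \sigma_f^2\, K + \sigma_n^2 I

% Optimise the log of each positive hyperparameter; with \theta = \log \sigma_f,
% \sigma_f^2 = e^{2\theta} stays positive for any \theta, and
\frac{\partial \tilde{K}}{\partial \theta} = 2\,\sigma_f^2\, K
```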
os252 | You may also wonder whether it makes sense to implement the range restrictions in the kernel function, i.e. have a queryable interface where you automatically determine the legal range for each parameter. | 20:44 |
os252 | But that's rather nice to have. | 20:44 |
os252 | I think given where you are the priorities are probably : | 20:44 |
os252 | * finish the parameter learning machinery as you had planned | 20:44 |
os252 | * add scaling of kernels | 20:44 |
os252 | * add uncertainty to apply_regression | 20:44 |
os252 | * make a few demos to look at | 20:45 |
os252 | * caching (see my email) | 20:45 |
-!- gsomix [~gsomix@95.67.169.72] has quit [Remote host closed the connection] | 20:45 | |
os252 | * get more kernels that have derivatives (the combinator is very useful to have sums of kernels) | 20:45 |
os252 | That's a suggestion. With these items, the core part of the project would be sort of done. | 20:46 |
puffin444_ | Hey oliver, which week are we on for the GSoC? | 20:46 |
puffin444_ | Did we just finish up the 5th? | 20:47 |
os252 | I think so | 20:47 |
os252 | So still plenty of time :-) | 20:48 |
puffin444_ | Okay, so according to the original plan after this I move on to the Sparse Approximation | 20:48 |
os252 | But you also had ambitious plans. | 20:48 |
puffin444_ | Yes, :) | 20:48 |
os252 | Yeah, you may want to rethink this a little. Once the basics work, which is now the case - most things get faster. | 20:49 |
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has joined #shogun | 20:49 | |
puffin444_ | I definitely first need to work on some these things you mentioned. | 20:49 |
os252 | A question is also how many of the kernels to make GP ready, including derivatives. That could also be a substantial time sink. | 20:49 |
puffin444_ | Yes. | 20:49 |
os252 | Perhaps worth consulting Soeren which priorities he'd suggest. | 20:50 |
os252 | If it takes now an extra week to get this all solid and the basics working that seems all fine. I can't imagine it'll take you more than 2 weeks to get most of this done. | 20:51 |
os252 | There is also plenty of scope to move some of these things to the sparse part then and get it done while implementing these things | 20:51 |
puffin444_ | Yes. | 20:51 |
os252 | (for example caching, which is sort of a speed issue -> sparse as well) | 20:51 |
puffin444_ | So in my progress report how about I choose these as my goals for this week: | 20:52 |
puffin444_ | 1. Add Scaling of kernels | 20:53 |
puffin444_ | 2. Allow apply_regression in GaussianProcessRegression to return variances as well as means | 20:53 |
puffin444_ | 3. Add a few visual demos | 20:53 |
puffin444_ | 4. Make the calculations in GaussianProcessRegression more efficient. | 20:53 |
puffin444_ | (caching) | 20:53 |
puffin444_ | 5. Add derivatives to some more kernels. | 20:53 |
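For context, item 2 above amounts to returning the standard GP predictive equations (textbook material, e.g. Rasmussen & Williams, ch. 2); with k_* the vector of kernel evaluations between a test input x_* and the training inputs:

```latex
\mu_\ast = k_\ast^\top (K + \sigma^2 I)^{-1} y,
\qquad
\sigma_\ast^2 = k(x_\ast, x_\ast) - k_\ast^\top (K + \sigma^2 I)^{-1} k_\ast
```

The mean \mu_\ast is what apply_regression already returns; \sigma_\ast^2 is the per-point variance to add.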
os252 | I was hoping you stop here ;-) | 20:54 |
puffin444_ | yes | 20:54 |
puffin444_ | sorry :) | 20:54 |
os252 | Yes, sounds good. Ask for guidance regarding which kernels. | 20:54 |
puffin444_ | I will. | 20:54 |
@sonney2k | puffin444_, gaussian, poly, linear | 20:54 |
os252 | I definitely would add linear and the combinator (sum) | 20:54 |
@sonney2k | that's it | 20:54 |
puffin444_ | Gaussian is already there; will add poly, linear, and combinator | 20:55 |
os252 | Thanks sonney2k. Definitely a good start. But let's add sum - which is the real advantage of GPs over SVMs ;-) | 20:55 |
os252 | puffin444: great. | 20:55 |
@sonney2k | sum == combined kernel? | 20:55 |
puffin444_ | ok | 20:56 |
os252 | It's called combined in shogun, yes. | 20:56 |
@sonney2k | \sum_i beta_i K_i(x,x') ? | 20:56 |
@sonney2k | k | 20:56 |
@sonney2k | yes then | 20:56 |
@sonney2k | but then each kernel already has a weight | 20:56 |
blackburn | what is advantage over svm? | 20:56 |
@sonney2k | though only used in combined kernel | 20:56 |
os252 | It's a bit annoying to implement because you need to take care of derivatives and call the code of relevant kernel functions that are part of the combinator. | 20:56 |
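The bookkeeping os252 mentions is mechanical: for a weighted-sum (combined) kernel the derivatives factor through the sub-kernels, so the combinator only needs to dispatch to each part's own derivative code:

```latex
K(x,x') = \sum_i \beta_i K_i(x,x' \mid \theta_i),
\qquad
\frac{\partial K}{\partial \beta_i} = K_i(x,x' \mid \theta_i),
\qquad
\frac{\partial K}{\partial \theta_i} = \beta_i \frac{\partial K_i}{\partial \theta_i}
```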
@sonney2k | os252, I don't think it is hard ... | 20:57 |
os252 | blackburn: well you can just handle a large number of kernel parameters as you don't need to cross-validate anything, i.e. you can realistically learn the relative weights of many components. | 20:57 |
@sonney2k | os252, we want some *fancy* example | 20:57 |
os252 | sonney2k, you mean nice demos? | 20:58 |
os252 | We should definitely have some nice regression demos with sum and product kernels; those are nice for a large class of regression problems. | 20:58 |
blackburn | os252: sorry, I am totally lame in GPs - so you don't have to do xval to get the best parameter values? | 20:59 |
puffin444_ | I'll put in some demos | 20:59 |
os252 | blackburn, right - yes. You still can (if you have few of them and just want to predict) but the standard is to optimize the marginal likelihood, | 20:59 |
os252 | i.e. p(y | x, K, \theta) where \theta is the set of kernel parameters. | 20:59 |
blackburn | I see | 20:59 |
@sonney2k | I am also no GP person, os252 how is this optimized? | 21:00 |
os252 | So merely finding the most probable state of parameters is an easy (optimization) problem and hence you can realistically make use of far more complex kernels than when you need a grid... | 21:00 |
@sonney2k | os252, but don't you need to have some priors for that then? | 21:00 |
os252 | You can put some on, but don't need to (given enough data). | 21:00 |
@sonney2k | or how do you define the optimization problem? | 21:00 |
os252 | p(y| x,K,\theta) = N(y| 0, K(x,theta)) | 21:01 |
os252 | K: choice of kernel ( complicated convoluted thing) | 21:01 |
os252 | x: inputs | 21:01 |
os252 | theta: parameter of K | 21:01 |
os252 | \hat{\theta} = \argmax_\theta \ln N(y | 0, K(x,\theta)) | 21:01 |
@sonney2k | N == normal distribution I guess | 21:01 |
os252 | If you know the gradients of K w.r.t all \theta_i you can easily calculate gradients of \ln N(...) | 21:02 |
os252 | yes, N = normal | 21:02 |
os252 | And then stick the whole thing in a gradient-based optimizer. It's not convex for almost any choice of K but local optima are generally pretty good. | 21:02 |
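The gradients mentioned above have a standard closed form (Rasmussen & Williams, GPML, eq. 5.9); with \alpha = K^{-1} y:

```latex
\ln N(y \mid 0, K) = -\tfrac{1}{2}\, y^\top K^{-1} y - \tfrac{1}{2} \ln |K| - \tfrac{n}{2} \ln 2\pi,
\qquad
\frac{\partial}{\partial \theta_i} \ln N(y \mid 0, K)
  = \tfrac{1}{2}\, \mathrm{tr}\!\left( \left( \alpha \alpha^\top - K^{-1} \right) \frac{\partial K}{\partial \theta_i} \right)
```

So once each kernel exposes \partial K / \partial \theta_i, the whole objective and its gradient are cheap to assemble from one Cholesky factorization of K.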
os252 | puffin444 is almost there. You can give it a go pretty soon ;-) | 21:03 |
blackburn | are you going to use l-bfgs-b for that? | 21:04 |
@sonney2k | os252, sorry for asking a stupid question , but you determine mean and covariance matrix for N right? | 21:04 |
@sonney2k | is 0 the mean? | 21:04 |
@sonney2k | and K() the cov? | 21:04 |
os252 | puffin444: NLOPT/ bfgsb decided? NLOPT has an lbfgsb implementation in it and much more... so I'd favor that. | 21:05 |
os252 | sonney2k. | 21:05 |
puffin444_ | NLOPT is in there right now. | 21:05 |
blackburn | I did that for you :D | 21:05 |
puffin444_ | Works beautifully | 21:05 |
os252 | So the mean can be non-zero. We'll have a few simple mean functions (most simply a learnable constant). | 21:05 |
puffin444_ | thanks blackburn. | 21:05 |
os252 | In practice, zero-meaning the data is actually good enough for most cases. | 21:06 |
os252 | The covariance is constructed from the kernel. | 21:06 |
blackburn | sonney2k: ah while you are there | 21:06 |
os252 | If you know the parameters (\theta) | 21:06 |
blackburn | sonney2k: have you seen eigen3? | 21:06 |
os252 | K_{i,j} = k(x_i,x_j) + \delta_{i==j} \sigma^2 | 21:06 |
os252 | or better | 21:07 |
os252 | K_{i,j} = k(x_i,x_j | \theta ) + \delta_{i==j} \sigma^2 | 21:07 |
@sonney2k | os252, ok | 21:07 |
@sonney2k | blackburn, yes | 21:07 |
os252 | all you do now is optimize the marginal likelihood w.r.t. \theta, \sigma^2, which in turn determine the covariance matrix and hence the likelihood of the data. | 21:07 |
@sonney2k | blackburn, wanted to use it for shogun at some point - but it is not optimized for many CPU archs | 21:08 |
@sonney2k | os252, the principle is clear now | 21:08 |
blackburn | sonney2k: do you need many? | 21:08 |
os252 | sonney2k: Is it really still an issue? [Eigen] | 21:08 |
os252 | I love Eigen. Suddenly I can read my C++ maths code and can virtually copy and paste python -> C++. | 21:08 |
@sonney2k | blackburn, not personally no | 21:09 |
blackburn | I would like to have it here | 21:09 |
os252 | Since the last release you can use Intel MKL (and by the next release any BLAS/LAPACK-conformant library) underneath it, too.. | 21:09 |
@sonney2k | blackburn, so you want to drop atlas/lapack? | 21:09 |
os252 | In case you are worried about performance. | 21:09 |
blackburn | sonney2k: well lapack has only a few of capabilities of eigen3 | 21:10 |
@sonney2k | os252, from the benchmarks I've seen it is faster on modern archs sseN etc | 21:10 |
blackburn | so if you are ok with it I would even use it for new code | 21:11 |
os252 | Yeah.. | 21:11 |
os252 | Just saw Eigen 3.1 is out. | 21:11 |
os252 | So they will now definitely go for dual interfacing. | 21:11 |
os252 | Either Eigen with blas interface or vice versa which is nice. | 21:11 |
blackburn | os252: do you know whether they provide a standard blas interface? | 21:12 |
os252 | I find it great to use; use it for virtually all my projects now. | 21:12 |
os252 | they do! | 21:12 |
blackburn | cblas_dgemm blabla | 21:12 |
os252 | I think they provide the full interface and use it for their benchmarks. | 21:12 |
blackburn | then I would like to drop atlas/lapack dependency | 21:12 |
blackburn | sonney2k: | 21:12 |
blackburn | ^ | 21:12 |
os252 | Yes, take a look at that. It will reduce compile headache a lot. | 21:13 |
os252 | The sole downside (unfortunately not minor): Eigen is a header-only library and hence definitely does not decrease compile time. | 21:13 |
blackburn | os252: I thought you are more math person :) | 21:13 |
os252 | That may be an issue given the size of shogun. I am not sure how fast it is though if you are using the blas stuff (that should be compilable). | 21:13 |
@sonney2k | well if we can avoid using it in *headers* it won't make a difference | 21:14 |
blackburn | os252: most time consuming are interfaces | 21:14 |
@sonney2k | because interfaces are generated from headers only | 21:15 |
os252 | Well, like maths that still runs fastish (genetics -> GWAS -> terabytes of data -> waiting .... | 21:15 |
blackburn | sonney2k: I need your decision then :) | 21:15 |
os252 | I guess if you continue using shogun-internal matrices and arrays you can. | 21:15 |
os252 | (i.e. don't write public functions that take eigen arrays as arguments). | 21:16 |
blackburn | we don't really need to use shogun-internal matrices in solvers and etc | 21:16 |
blackburn | usually it is double* here | 21:16 |
os252 | ok.. then all good. | 21:16 |
os252 | You can create Eigen Matrices from a double* etc. in functions. | 21:17 |
@sonney2k | blackburn, err you want to do that before the end of gsoc? | 21:17 |
@sonney2k | to me it sounds like something to do after | 21:17 |
blackburn | sonney2k: yeah why not | 21:17 |
blackburn | it would speed up my development | 21:17 |
@sonney2k | we only have 2 months before we want to release?! | 21:17 |
blackburn | sonney2k: it changes nothing | 21:18 |
blackburn | blas and lapack would still work | 21:18 |
@sonney2k | I don't like a halfway solution ... can one emulate lapack/blas calls? | 21:19 |
blackburn | sonney2k: interface is provided! | 21:19 |
blackburn | already | 21:19 |
blackburn | just as it is | 21:19 |
@sonney2k | as in create a Matrix from double? | 21:19 |
@sonney2k | double*, n, m | 21:19 |
blackburn | sonney2k: we don't have to | 21:20 |
blackburn | halfway is ok here I think | 21:20 |
blackburn | we don't need to change everything right now | 21:20 |
os252 | yes... it has lapack/blas interfaces, and interfacing from double* is easy. | 21:20 |
os252 | You could just add it in and switch over for bits and pieces | 21:20 |
blackburn | I would like to use it only in new code for now | 21:21 |
blackburn | I am doing a lot of vectorized algos now | 21:21 |
@sonney2k | blackburn, it is another huge dependency just for a few lines of code | 21:21 |
blackburn | sonney2k: we can remove atlas dependency | 21:21 |
@sonney2k | os252, any example? | 21:21 |
blackburn | sudo apt-get install eigen3 is not that difficult | 21:22 |
os252 | Map<MatrixXd> mf(ptr, rows, columns); // ptr is a double* | 21:22 |
os252 | :-) | 21:22 |
puffin444_ | Oliver, I will get to work on these things ASAP. | 21:23 |
blackburn | sonney2k: check what I do in shogun/lib/slep/slep_mt_lsr.cpp - basically vector additions and etc | 21:23 |
os252 | puffin444, ok great! | 21:23 |
os252 | It's just a bunch of headers. You can even just copy them into shogun. There is no compiled piece of code in there. | 21:23 |
blackburn | I would really like to use some thing like eigen3 here - it is slower to use for loops here (both to develop and compute) | 21:24 |
-!- puffin444_ [62e3926e@gateway/web/freenode/ip.98.227.146.110] has quit [Quit: Page closed] | 21:24 | |
@sonney2k | os252, fair enough | 21:28 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 21:29 | |
blackburn | sonney2k: there was an example of how its API works | 21:32 |
blackburn | they showed that all the abstractions go away at compilation | 21:32 |
os252 | Here: http://eigen.tuxfamily.org/index.php?title=API_Showcase | 21:32 |
os252 | Here, I used it for sparse factor analysis, barely any more code than the Python predecessor https://github.com/PMBio/peer/blob/master/src/sparsefa.cpp | 21:33 |
blackburn | os252: was numpy not fast enough? | 21:36 |
os252 | depends. | 21:36 |
os252 | For anything you can code up without for loops there is little difference (obviously). | 21:37 |
blackburn | I mean for your project | 21:37 |
os252 | I mainly wanted it in C++ due to ease of interfacing with R. | 21:37 |
blackburn | ahh | 21:37 |
blackburn | yeah main reason here too I think :D | 21:37 |
os252 | yeah - you won't beat numpy by a mile. It's pretty optimized by now. | 21:37 |
os252 | In particular for models like GPs where the dominant cost is inverting or decomposing a huge matrix... | 21:38 |
n4nd0 | did you know about this apache mahout guys? http://mahout.apache.org/ | 21:38 |
@sonney2k | blackburn, well I don't know how far you are with your project but you can certainly hack up configure support for eigen3, identify the cblas* / lapack functions we use and then we can just wrap them? | 21:43 |
blackburn | sonney2k: why to wrap them? | 21:43 |
@sonney2k | blackburn, how can we drop lapack/blas if we don't do that? | 21:45 |
blackburn | sonney2k: eigen3 provides headers of cblas_* stuff | 21:45 |
@sonney2k | blackburn, without using blas/lapack right? | 21:46 |
@sonney2k | then try to replace it... | 21:46 |
blackburn | yes | 21:46 |
blackburn | ok I'll try | 21:46 |
@sonney2k | shogun-buildbot, status | 21:47 |
shogun-buildbot | cmdline_static: idle, last build 19h45m01s ago: failed test_1 | 21:47 |
shogun-buildbot | csharp_modular: idle, last build 165h50m41s ago: build successful | 21:47 |
shogun-buildbot | java_modular: idle, last build 165h35m17s ago: failed test_1 | 21:47 |
shogun-buildbot | libshogun: idle, last build 2h24m51s ago: failed configure | 21:47 |
shogun-buildbot | lua_modular: idle, last build 165h43m09s ago: failed test_1 | 21:47 |
shogun-buildbot | nightly_all: idle, last build 44m18s ago: failed configure | 21:47 |
shogun-buildbot | nightly_default: idle, last build 44m04s ago: failed configure | 21:47 |
shogun-buildbot | nightly_none: idle, last build 44m33s ago: failed configure | 21:47 |
shogun-buildbot | octave_modular: idle, last build 165h58m50s ago: failed test_1 | 21:47 |
shogun-buildbot | octave_static: idle, last build 19h39m52s ago: failed test_1 | 21:47 |
shogun-buildbot | python_modular: idle, last build 165h27m32s ago: failed test_1 | 21:47 |
shogun-buildbot | python_static: idle, last build 8h38m57s ago: failed configure | 21:47 |
shogun-buildbot | r_modular: idle, last build 165h18m54s ago: build successful | 21:47 |
shogun-buildbot | r_static: idle, last build 19h49m08s ago: failed configure | 21:47 |
@sonney2k | bah | 21:47 |
@sonney2k | blackburn, if it works all good | 21:49 |
blackburn | sonney2k: then? | 21:52 |
-!- os252 [55b3f5d9@gateway/web/freenode/ip.85.179.245.217] has quit [Quit: Page closed] | 22:22 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has left #shogun [] | 22:29 | |
-!- nickon [~noneedtok@dD5774105.access.telenet.be] has quit [Quit: ( www.nnscript.com :: NoNameScript 4.22 :: www.esnation.com )] | 22:32 | |
-!- ckwidmer [8ca3fe9d@gateway/web/freenode/ip.140.163.254.157] has joined #shogun | 22:36 | |
-!- tejaswi [~tejaswidp@117.192.131.69] has joined #shogun | 22:51 | |
-!- tejaswi [~tejaswidp@117.192.131.69] has quit [Read error: Connection timed out] | 23:07 | |
--- Log closed Tue Jun 26 00:00:07 2012 |
Generated by irclog2html.py 2.10.0 by Marius Gedminas - find it at mg.pov.lt!