--- Log opened Sat May 26 00:00:41 2012 | ||
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 265 seconds] | 00:13 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 00:29 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Client Quit] | 00:29 | |
shogun-buildbot | build #573 of csharp_modular is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/csharp_modular/builds/573 | 00:38 |
-!- blackburn [~blackburn@31.28.59.65] has quit [Ping timeout: 245 seconds] | 02:35 | |
-!- blackburn [~blackburn@31.28.59.65] has joined #shogun | 02:39 | |
-!- blackburn [~blackburn@31.28.59.65] has quit [Ping timeout: 246 seconds] | 03:40 | |
-!- pluskid [~pluskid@111.120.68.25] has joined #shogun | 07:37 | |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has joined #shogun | 09:22 | |
CIA-113 | shogun: Jacob Walker master * r349d66c / (5 files in 2 dirs): Added Element-wise Product Kernel. This Kernel is based heavily on - http://git.io/1oRpRw | 09:25 |
CIA-113 | shogun: Soeren Sonnenburg master * rf29facf / (5 files in 2 dirs): Merge pull request #555 from puffin444/master - http://git.io/yvezww | 09:25 |
CIA-113 | shogun: Soeren Sonnenburg master * r156a436 / (6 files): use appropriate label types for ruby modular - http://git.io/APkF0w | 09:29 |
shogun-buildbot | build #796 of octave_static is complete: Failure [failed test_1] Build details are at http://www.shogun-toolbox.org/buildbot/builders/octave_static/builds/796 blamelist: walke434@msu.edu | 09:43 |
shogun-buildbot | build #797 of octave_static is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/octave_static/builds/797 | 10:09 |
-!- n4nd0 [~nando@s83-179-44-135.cust.tele2.se] has quit [Ping timeout: 246 seconds] | 10:37 | |
-!- n4nd0 [~nando@n155-p45.kthopen.kth.se] has joined #shogun | 11:04 | |
-!- pluskid [~pluskid@111.120.68.25] has quit [Ping timeout: 246 seconds] | 11:04 | |
-!- pluskid [~pluskid@li400-235.members.linode.com] has joined #shogun | 11:04 | |
n4nd0 | hi pluskid, I have just read your mail | 11:06 |
n4nd0 | I really like the idea | 11:07 |
pluskid | n4nd0: :D | 11:08 |
pluskid | I remember seeing somebody trying to set up standard ML benchmarks that compare some state-of-the-art algorithms on some *real world* datasets, but I'm no longer able to find the site; maybe he didn't continue... | 11:09 |
n4nd0 | pluskid: interesting | 11:09 |
n4nd0 | pluskid: let's try not to forget about it and maybe we can think of it more carefully once our GSoC projects are more advanced | 11:10 |
n4nd0 | and well, let's see what the others think :) | 11:10 |
pluskid | yeah | 11:10 |
-!- pluskid [~pluskid@li400-235.members.linode.com] has quit [Ping timeout: 246 seconds] | 12:15 | |
-!- pluskid [~pluskid@111.120.68.25] has joined #shogun | 12:33 | |
-!- blackburn [~blackburn@31.28.59.65] has joined #shogun | 13:06 | |
-!- n4nd0 [~nando@n155-p45.kthopen.kth.se] has quit [Quit: leaving] | 13:54 | |
-!- pluskid [~pluskid@111.120.68.25] has quit [Ping timeout: 246 seconds] | 13:58 | |
-!- pluskid [~pluskid@li164-218.members.linode.com] has joined #shogun | 13:58 | |
-!- blackburn [~blackburn@31.28.59.65] has quit [Ping timeout: 250 seconds] | 15:41 | |
-!- puffin444 [62e3926e@gateway/web/freenode/ip.98.227.146.110] has joined #shogun | 16:00 | |
-!- oliver [55b43dc2@gateway/web/freenode/ip.85.180.61.194] has joined #shogun | 16:01 | |
oliver | puffin444, just ping me when you are here. | 16:02 |
puffin444 | oliver: I'm here | 16:02 |
oliver | great. | 16:02 |
oliver | Perhaps let's start with what we'd like to discuss. | 16:03 |
oliver | Anything specific on your side? | 16:03 |
oliver | I mainly have a number of suggestions on ordering and on how to avoid the (usual) pain in getting some of the GP details to work, to save you time. | 16:04 |
oliver | But if you want to start with sth. else - let's do that first. | 16:04 |
puffin444 | I do not have any question right now | 16:05 |
oliver | Ok. What is on your list for next week? | 16:05 |
puffin444 | I believe I'll start working on the inference methods according to the plan | 16:06 |
-!- Francis_Chan1 [~Adium@58.194.224.108] has joined #shogun | 16:07 | |
oliver | great. That's a good time then to discuss a few things. | 16:07 |
oliver | You previously coded up the basic GP model. | 16:07 |
oliver | I'd just like to share a few points of things which could be cumbersome to get moving quickly. | 16:07 |
oliver | As a first goal I'd use the simplest possible GP model and basically just extend your existing GP class. | 16:08 |
oliver | You have a flexible kernel function in there already; the only thing you need to add is noise, a diagonal term. | 16:08 |
oliver | How did you plan to handle the noise in the standard Gaussian case? | 16:08 |
puffin444 | Do you mean the eta in y = f(x) + eta ? | 16:09 |
oliver | There is some ambiguity. For standard GP stuff noise is effectively just a component of the covariance - which is one option. | 16:09 |
oliver | I mean: is it strictly part of the likelihood model? | 16:09 |
oliver | Perhaps that's the cleanest way to start out with. | 16:09 |
oliver | Basically: what I'd suggest is to just get the GP class you have now running, including gradient optimization and simple Gaussian noise. | 16:10 |
oliver | Just one kernel (RBF would be suitable), Gaussian noise and gradient based optimization. | 16:11 |
oliver | When you have that, start structuring it into different components; it's merely pulling the pieces apart. | 16:11 |
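The stripped-down model oliver describes (one RBF kernel, Gaussian noise as a diagonal term, and the log marginal likelihood that is later optimized) could be sketched in plain numpy roughly as follows. This is only an illustrative sketch with made-up function names, not shogun's GPRegression API:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0):
    # squared-exponential (RBF) kernel matrix for 1-D inputs
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_regression(X, y, Xs, length_scale=1.0, noise=0.1):
    # Gaussian observation noise enters as a diagonal term on the covariance
    K = rbf_kernel(X, X, length_scale) + noise ** 2 * np.eye(len(X))
    Ks = rbf_kernel(Xs, X, length_scale)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha  # posterior mean at the test inputs Xs
    # log marginal likelihood -- the objective for hyperparameter learning
    sign, logdet = np.linalg.slogdet(K)
    lml = -0.5 * y @ alpha - 0.5 * logdet - 0.5 * len(X) * np.log(2 * np.pi)
    return mean, lml
```

A Cholesky factorization would normally replace the direct solve for numerical stability; the direct solve keeps the sketch short.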
oliver | Have you looked into an interface for kernels to provide gradient information and math needed to calc the derivatives of marginal likelihoods w.r.t. kernel parameters etc. ? | 16:12 |
puffin444 | No I actually have not. Right now I only have the likelihood model return derivatives. Perhaps I need to add this interface | 16:14 |
oliver | Yes, you will need it one way or the other. | 16:14 |
oliver | Have you done stuff with gradient optimization before ? | 16:15 |
puffin444 | Only in simple cases. I wrote a gradient search for a neural network a while back. | 16:16 |
oliver | ok, great. | 16:16 |
oliver | Here's what generally works well here. | 16:16 |
oliver | You have a complex function, which is the marginal likelihood | 16:16 |
oliver | There is a likelihood model and the kernel. | 16:16 |
oliver | The kernel may itself be a product or sum of other kernels. | 16:16 |
oliver | In the end you want the gradient and it better be correct. | 16:16 |
oliver | It's best to check gradients for every component independently using a grad check - did you use that back then? | 16:17 |
puffin444 | No. | 16:17 |
oliver | It's part of most optimizers, but the approach is simply to compute the numerical gradient and compare it to the analytical form. | 16:17 |
oliver | i.e. f'(x) ≈ (f(x+h) - f(x)) / h | 16:18 |
oliver | you just calc that for small h | 16:18 |
oliver | if multi-dimensional, as here, you take one step in every direction. | 16:18 |
oliver | then you can check that this numerical implementation of f' matches your analytical solution. | 16:18 |
oliver | you can debug that for the kernel separately and all components and then put together. | 16:18 |
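The grad check oliver describes fits in a few lines of numpy. The helper name is made up; real optimizers (and pygp's optimize_base.py) ship their own variant:

```python
import numpy as np

def grad_check(f, grad_f, x, h=1e-6):
    # Compare the analytical gradient grad_f(x) against a finite-difference
    # estimate of f'(x), taking one step in every direction of x.
    num = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        # central difference; the one-sided form (f(x+h) - f(x)) / h works too
        num[i] = (f(x + e) - f(x - e)) / (2 * h)
    return np.max(np.abs(num - grad_f(x)))
```

Running it on each component (kernel, likelihood model) separately, before composing them, is exactly the debugging strategy suggested above.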
puffin444 | Okay. Because we are computing the gradient analytically, doesn't this mean that a specific way to compute the gradient must be added to every kernel? | 16:19 |
oliver | absolutely | 16:19 |
oliver | every kernel needs to calculate the derivative of all kernel entries w.r.t. its parameters | 16:19 |
oliver | I'd start with RBF kernel first | 16:19 |
oliver | do you have matlab? | 16:19 |
puffin444 | I have octave | 16:20 |
puffin444 | and I can get access to matlab if necessary | 16:20 |
oliver | yeah, equally good. You can either steal from gpml as you wrote in your proposal or, if you prefer python, I have a similar thing in python. | 16:20 |
oliver | Both implement all these gradients for various kernels, so they're good places to get inspiration on how to do things. | 16:20 |
oliver | For example, here are implementations of the rbf kernel derivatives: | 16:22 |
oliver | https://github.com/PMBio/pygp/blob/master/pygp/covar/se.py | 16:22 |
oliver | This contains a variant of grad check to get inspiration from: | 16:22 |
oliver | https://github.com/PMBio/pygp/blob/master/pygp/optimize/optimize_base.py | 16:22 |
oliver | And this is an example of a simple demo with optimization we should try to replicate in shogun (very similar ones in gpml) | 16:23 |
oliver | https://github.com/PMBio/pygp/blob/master/pygp/demo/demo_gpr.py | 16:23 |
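As a concrete example of what such a kernel-gradient interface has to return (a sketch in the spirit of pygp's se.py, not its actual API): for the RBF kernel k(x,x') = exp(-d²/(2ℓ²)), the derivative of each kernel entry w.r.t. the length-scale ℓ is k(x,x')·d²/ℓ³:

```python
import numpy as np

def rbf(X, ell):
    # RBF kernel matrix for 1-D inputs X with length-scale ell
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

def drbf_dell(X, ell):
    # entry-wise derivative of the kernel matrix w.r.t. the length-scale:
    # d/d(ell) exp(-d2 / (2 ell^2)) = K * d2 / ell^3
    d2 = (X[:, None] - X[None, :]) ** 2
    return rbf(X, ell) * d2 / ell ** 3
```

This is also a natural first target for the grad check: compare `drbf_dell` against a finite difference of `rbf` before wiring it into the marginal-likelihood gradient.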
puffin444 | okay | 16:23 |
oliver | I guess take a look and ping me if you need anything. It will take you a bit of time to get it to work at all (and getting the maths right), so keep things low complexity with few classes until you have a working thing and then build all the framework machinery on top. | 16:24 |
oliver | I am travelling quite a bit next week, so best is to arrange discussions via email. | 16:25 |
oliver | I'll be back to regular office mode on Friday | 16:25 |
puffin444 | So in this case I should first work with the class I have (GPRegression) and write what is necessary to learn the hyperparameters? | 16:26 |
puffin444 | After this is accomplished, write the infrastructure for this functionality to be generalized | 16:26 |
oliver | yes | 16:27 |
oliver | exactly. Play around with a simple test bed where you have everything under control and can move quickly. | 16:27 |
oliver | And then complicate it. | 16:27 |
puffin444 | What do you think is a reasonable timeframe for this? According to my project plan all the infrastructure needed for hyperparameter learning | 16:27 |
puffin444 | (no approximations or non-gaussian likelihoods) is written by half time | 16:28 |
oliver | I think if you strip down all the complexity | 16:28 |
oliver | and take just one kernel, one likelihood model and the GP base class, that could be 1-2 weeks. | 16:29 |
oliver | There is a lot of code (python/matlab) to get inspiration from. | 16:29 |
puffin444 | I was thinking the same | 16:29 |
oliver | Probably it's more on the order of 2 weeks. | 16:29 |
oliver | Then you always have a version to compare to and can check that the framework etc. is correct and produces the correct results. | 16:30 |
puffin444 | In 1-2 weeks, get the simple case running correctly, and then the next two weeks generalize this into the infrastructure we discussed earlier | 16:30 |
oliver | yes, agree. | 16:30 |
oliver | And use gradcheck for everything ;-) | 16:31 |
oliver | I never coded up a single gradient that was correct first shot. | 16:31 |
puffin444 | How should I go about merging this into the main branch? Should I just make changes to my fork and only have a pull request at the very end? | 16:33 |
oliver | I think it's good to get all the feedback early on, otherwise it will be painful in the end. | 16:33 |
oliver | If it's clearly code that is temporary keep it local, otherwise merging is good. | 16:33 |
puffin444 | Okay. | 16:34 |
oliver | Even if some of the GP base class will change, it ensures that the code is in line with shogun guidelines right from the start. | 16:34 |
oliver | Anything else? | 16:36 |
puffin444 | So I will make pull requests incrementally | 16:36 |
puffin444 | Not at this point | 16:36 |
oliver | great | 16:36 |
puffin444 | Thanks for taking time for this meeting | 16:36 |
oliver | Let's keep in touch. Email will be best next week. | 16:37 |
oliver | No prob, thanks for getting up early. I am just leaving tonight and thought it would be good to catch up before then. | 16:37 |
oliver | Talk soon. | 16:38 |
puffin444 | I will probably be emailing you over next week. | 16:38 |
oliver | excellent | 16:38 |
puffin444 | Do you have any other questions for me? | 16:39 |
oliver | Not right now. | 16:39 |
oliver | Let's use email till Friday. Take care and happy coding. | 16:40 |
puffin444 | Okay. You too. | 16:40 |
-!- oliver [55b43dc2@gateway/web/freenode/ip.85.180.61.194] has quit [Quit: Page closed] | 16:40 | |
-!- puffin444 [62e3926e@gateway/web/freenode/ip.98.227.146.110] has left #shogun [] | 16:42 | |
-!- Francis_Chan1 [~Adium@58.194.224.108] has left #shogun [] | 16:50 | |
-!- pluskid [~pluskid@li164-218.members.linode.com] has quit [Quit: Leaving] | 18:57 | |
-!- gsomix [~gsomix@109.169.139.36] has joined #shogun | 19:01 | |
gsomix | hi all | 19:01 |
-!- blackburn [~blackburn@31.28.59.65] has joined #shogun | 19:40 | |
-!- gsomix [~gsomix@109.169.139.36] has quit [Ping timeout: 244 seconds] | 19:46 | |
-!- blackburn [~blackburn@31.28.59.65] has quit [Quit: Leaving.] | 19:46 | |
-!- blackburn [~blackburn@31.28.59.65] has joined #shogun | 19:46 | |
-!- gsomix [~gsomix@109.169.151.2] has joined #shogun | 19:59 | |
@sonney2k | gsomix, hi there ... so how is the director kernel stuff progressing? | 20:04 |
gsomix | sonney2k, moin, I'm working now. | 20:05 |
gsomix | just checked directors with templates, trying to integrate them into shogun | 20:06 |
CIA-113 | shogun: Soeren Sonnenburg master * r216b658 / examples/undocumented/ruby_modular/classifier_libsvm_minimal_modular.rb : fix ruby example - http://git.io/WvBveA | 20:09 |
gsomix | sonney2k, and about exams. I have last at 4 June (there are some exams after, but it's easy). | 20:13 |
gsomix | that's all | 20:13 |
blackburn | that's all folks :D | 20:16 |
@sonney2k | gsomix, ok... but what about the simple test I suggested? | 20:18 |
@sonney2k | gsomix, enabling directors for some *simple* class in shogun that derives from SGObject? | 20:18 |
@sonney2k | gsomix, I mean even just an example like the one you had | 20:20 |
@sonney2k | you could just take this class and derive it from sgobject | 20:20 |
@sonney2k | then test if it still works within all the shogun stuff | 20:20 |
* sonney2k awaits the buildbot to complete with 100% success now | 20:21 | |
gsomix | sonney2k, =___= please wait, I need 15-20 mins. | 20:22 |
shogun-buildbot | build #799 of octave_static is complete: Failure [failed test_1] Build details are at http://www.shogun-toolbox.org/buildbot/builders/octave_static/builds/799 blamelist: sonne@debian.org | 20:33 |
gsomix | sonney2k, directored DynamicArray works fine. | 20:44 |
gsomix | but there is just simple overloaded setter. | 20:45 |
gsomix | but with templates | 20:45 |
gsomix | sonney2k, should I commit it? | 20:47 |
gsomix | sonney2k, http://pastebin.com/yUpN06YJ | 20:47 |
gsomix | simple test | 20:48 |
* gsomix afk | 20:50 | |
@sonney2k | gsomix, thats very cool then :)) | 21:19 |
@sonney2k | then I really wonder why my directorkernel didn't work... | 21:19 |
@sonney2k | gsomix, maybe because it was a protected method! | 21:21 |
blackburn | :D | 21:21 |
blackburn | sonney2k: did you try to redefine compute? | 21:21 |
@sonney2k | How could I miss that compute() of CKernel is protected | 21:22 |
* sonney2k smacks head | 21:22 | |
blackburn | :D | 21:22 |
blackburn | :D | 21:22 |
* sonney2k once | 21:22 | |
* sonney2k twice | 21:22 | |
* sonney2k bumps head on the table | 21:22 | |
* sonney2k ouuuuch | 21:22 | |
blackburn | lolotron | 21:23 |
gsomix | sonney2k, yep. I think the problem is in the protected methods. | 21:29 |
gsomix | sonney2k, what to do next? | 21:30 |
CIA-113 | shogun: Soeren Sonnenburg master * r817fb76 / src/shogun/kernel/DirectorKernel.h : create *public* kernel_function method to be overidden in directorkernel - http://git.io/dTgxqw | 21:33 |
@sonney2k | gsomix, please have a look at this | 21:33 |
@sonney2k | in principle it should now be possible to override kernel_function when you compile shogun with --enable-swig-directors | 21:34 |
@sonney2k | the example for that is in kernel_director_modular.py | 21:35 |
@sonney2k | it needs some adjustments - maybe you create an example of your own | 21:35 |
gsomix | sonney2k, ok | 21:43 |
gsomix | sonney2k, do you have some ideas for directored kernel? | 21:44 |
@sonney2k | gsomix, maybe you do the reverselinear kernel from the shogun tutorial in python | 21:44 |
@sonney2k | gsomix, in libshogun examples kernel_revlin.cpp | 21:45 |
blackburn | may be just compute linear | 21:45 |
blackburn | and compare matrices | 21:45 |
@sonney2k | blackburn, or that | 21:46 |
@sonney2k | true | 21:46 |
blackburn | gsomix: linear kernel is numpy dot | 21:46 |
@sonney2k | blackburn, one has to pay attention though | 21:46 |
@sonney2k | linear kernel might be normalized | 21:46 |
blackburn | to accuracy | 21:46 |
blackburn | yes | 21:46 |
blackburn | ah | 21:46 |
blackburn | normalization true | 21:46 |
* sonney2k checks | 21:46 | |
blackburn | no need to check before | 21:47 |
blackburn | gsomix: just generate random data and compare linear kernel with director kernel | 21:48 |
blackburn | compute is | 21:48 |
@sonney2k | blackburn, no it uses identity kernel normalizer | 21:48 |
@sonney2k | so all good | 21:48 |
blackburn | just numpy.dot | 21:48 |
blackburn | clear? | 21:48 |
gsomix | yep | 21:48 |
@sonney2k | blackburn, it is not so easy | 21:48 |
blackburn | why? | 21:48 |
@sonney2k | there is sth we should discuss | 21:48 |
blackburn | what? | 21:48 |
@sonney2k | kernel.kernel(row,col) | 21:48 |
@sonney2k | operates on indices | 21:49 |
@sonney2k | so how should it work | 21:49 |
@sonney2k | should one do get_lhs()->get_feature_vector(row) | 21:49 |
blackburn | lhs = get_lhs().get_computed_feature_vector() | 21:49 |
@sonney2k | same for rhs | 21:49 |
@sonney2k | and then dot() | 21:49 |
blackburn | or lhs = X[:,i] | 21:49 |
blackburn | where X is a feature matrix | 21:49 |
@sonney2k | or should the features be set somewhere in the overloaded class? | 21:49 |
blackburn | depends | 21:50 |
blackburn | :D | 21:50 |
@sonney2k | blackburn, well... | 21:50 |
@sonney2k | if data is in the class | 21:50 |
blackburn | let me write compute function | 21:50 |
blackburn | :D | 21:50 |
@sonney2k | then we need a way to set num_lhs / num_rhs in directorkernel | 21:50 |
@sonney2k | otherwise kernel.kernel(i,j) will check if i<num_lhs | 21:50 |
@sonney2k | etc | 21:51 |
blackburn | I think no need to have extra features | 21:51 |
@sonney2k | and fail because num_lhs is 0 | 21:51 |
@sonney2k | gsomix, please add two functions to directorkernel | 21:51 |
blackburn | sonney2k: whY? | 21:51 |
@sonney2k | void set_num_vec_lhs(int32_t num_lhs) | 21:52 |
@sonney2k | and set_num_vec_rhs | 21:52 |
blackburn | why? | 21:52 |
blackburn | I think no need to add extra data to class | 21:52 |
@sonney2k | blackburn, you might have some complex data you want to compute kernels for | 21:52 |
blackburn | true | 21:52 |
@sonney2k | blackburn, think of graphs | 21:52 |
blackburn | agree | 21:52 |
@sonney2k | or parse trees | 21:52 |
@sonney2k | etc | 21:52 |
@sonney2k | blackburn, we have one issue here | 21:53 |
@sonney2k | if we use directors we cannot use threads | 21:53 |
blackburn | what? | 21:53 |
blackburn | okay fits perfectly with openmping | 21:54 |
@sonney2k | so we have to make sure that whenever some director* stuff is involved that num_threads = 1 | 21:54 |
@sonney2k | blackburn, no | 21:54 |
@sonney2k | same problem | 21:54 |
@sonney2k | python has the GIL | 21:54 |
blackburn | we have threads only in kernel matrix | 21:54 |
blackburn | let me check | 21:54 |
@sonney2k | so calling the (from python) overloaded function multiple times in parallel from threads is not possible | 21:54 |
blackburn | okay just some virtual function for # of threads | 21:55 |
@sonney2k | gsomix, so please when you do the test do kernel.parallel.set_num_threads(1) | 21:55 |
@sonney2k | gsomix, then kernel.get_kernel_matrix() | 21:55 |
blackburn | ARGH | 21:55 |
blackburn | I have no idea when to continue with openmp | 21:56 |
blackburn | I need someone to help me to test it | 21:56 |
@sonney2k | blackburn, openmp is not the issue here | 21:56 |
@sonney2k | problem will be there with openmp nevertheless | 21:56 |
blackburn | yes yes I understand | 21:56 |
blackburn | just recalled | 21:56 |
@sonney2k | gsomix, so once you have implemented the linear director kernel as example | 21:57 |
@sonney2k | just compute its kernel matrix | 21:57 |
@sonney2k | and compare how fast computing the matrix is (say for a 1000x1000 matrix) | 21:57 |
@sonney2k | compared to using LinearKernel() from shogun | 21:57 |
blackburn | fast heh | 21:57 |
@sonney2k | I would expect about 10-100 times slower... | 21:57 |
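The overhead sonney2k wants measured can be previewed without shogun at all: build the kernel matrix through a per-pair Python callback (which is effectively what a director kernel does on every entry) and through one vectorized product, then compare results and runtimes. A plain numpy sketch, not the actual DirectorKernel API:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))  # 300 vectors; scale up to 1000 for the real test
n = X.shape[0]

def compute(i, j):
    # per-pair callback on indices, like an overridden kernel_function(i, j)
    return float(np.dot(X[i], X[j]))

t0 = time.perf_counter()
K_callback = np.array([[compute(i, j) for j in range(n)] for i in range(n)])
t_callback = time.perf_counter() - t0

t0 = time.perf_counter()
K_vec = X @ X.T  # what a C++ LinearKernel computes, vectorized
t_vec = time.perf_counter() - t0

assert np.allclose(K_callback, K_vec)
print(f"callback: {t_callback:.3f}s, vectorized: {t_vec:.4f}s")
```

Crossing the language boundary n² times is what drives the expected 10-100x slowdown; the numbers printed here only bound the Python-call part of that overhead, without SWIG dispatch costs.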
blackburn | do you care? | 21:58 |
blackburn | different thing | 21:58 |
gsomix | sonney2k, ok | 21:59 |
@sonney2k | I care, yes. That will tell us the overhead and to what extent this is at all useful | 21:59 |
@sonney2k | if it is 1000 times slower it is not worth the effort | 22:00 |
blackburn | it is worth anyway I believe | 22:00 |
@sonney2k | gsomix, doing all that should take just 1-2 hours... | 22:00 |
blackburn | too cool to leave | 22:01 |
@sonney2k | cool indeed | 22:01 |
shogun-buildbot | build #161 of nightly_default is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/nightly_default/builds/161 | 22:03 |
gsomix | sonney2k, ok, time to work. | 22:04 |
blackburn | hmm lets try to kill some doc warnings | 22:04 |
gsomix | btw, today I worked as builder. http://piccy.info/view3/3062330/d0afe3a699bb4a6ee6a21cd667f0f501/ | 22:04 |
blackburn | hahah | 22:04 |
gsomix | I think I should add some text about shogun and gsoc at this photo... | 22:04 |
gsomix | hehe | 22:04 |
gsomix | <\offtop> | 22:05 |
@sonney2k | hardcore talent abuse! | 22:06 |
blackburn | sonney2k: typical russian house :D | 22:07 |
blackburn | sonney2k: that's what I think about your stl hate: http://dl.dropbox.com/u/10139213/share/IMG_0031d.JPG | 22:12 |
gsomix | blackburn, pink >__< | 22:14 |
blackburn | yes yes my gf's room is pink | 22:14 |
gsomix | blackburn, http://dl.dropbox.com/u/19029407/IMG_0116.JPG | 22:14 |
blackburn | :D | 22:14 |
blackburn | lol | 22:15 |
gsomix | I don't have photos with me and blackburn :( | 22:18 |
shogun-buildbot | build #800 of octave_static is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/octave_static/builds/800 | 22:24 |
CIA-113 | shogun: Sergey Lisitsyn master * r676b942 / (3 files in 3 dirs): Removed a few doc warnings - http://git.io/anO22w | 22:25 |
shogun-buildbot | build #558 of ruby_modular is complete: Success [build successful] Build details are at http://www.shogun-toolbox.org/buildbot/builders/ruby_modular/builds/558 | 23:14 |
gsomix | sonney2k, I have a strange error | 23:27 |
gsomix | DirectorLinear | 23:27 |
gsomix | Traceback (most recent call last): | 23:27 |
gsomix | File "kernel_director_linear_modular.py", line 43, in <module> | 23:27 |
gsomix | kernel_director_linear_modular(*parameter_list[0]) | 23:27 |
gsomix | File "kernel_director_linear_modular.py", line 31, in kernel_director_linear_modular | 23:27 |
gsomix | dkernel.init(feats_train, feats_train) | 23:27 |
gsomix | RuntimeError: maximum recursion depth exceeded while calling a Python object | 23:27 |
gsomix | and init... | 23:28 |
gsomix | virtual bool init(CFeatures* l, CFeatures* r) | 23:28 |
gsomix | { | 23:28 |
gsomix | return CKernel::init(l, r); | 23:28 |
gsomix | } | 23:28 |
gsomix | in DirectorKernel | 23:28 |
blackburn | what is code? | 23:28 |
gsomix | blackburn, http://pastebin.com/PXvUSyTh | 23:29 |
gsomix | and I know, that my kernel_function is wrong | 23:30 |
gsomix | hehe | 23:30 |
gsomix | tired, need to sleep | 23:30 |
blackburn | hmm try to trace calls | 23:30 |
blackburn | what is being called recursively? | 23:30 |
gsomix | init | 23:31 |
blackburn | init calls init? | 23:31 |
gsomix | blackburn, yep in Kernel | 23:32 |
gsomix | good night, guys =___= | 23:33 |
blackburn | nite | 23:33 |
-!- gsomix [~gsomix@109.169.151.2] has quit [Ping timeout: 244 seconds] | 23:39 | |
@sonney2k | hmmhh | 23:53 |
@sonney2k | that is the error I was getting | 23:53 |
blackburn | yes I do remember | 23:55 |
--- Log closed Sun May 27 00:00:41 2012 |