Fix Python benchmarks #9
I have fixed the Python benchmarks: https://github.com/opencog/benchmark/compare/master...vsbogd:fix-python-benchmarks?w=1 But they are extremely slow, even compared with guile. I am going to check what the reason is, and after that decide whether it makes sense to improve this code or whether some other approach is required for testing the Python bindings.
Last time the python benchmarks ran, they were as fast or faster than guile.
I have tested it on my workstation and it really is faster, and faster than guile. Probably the reason for the confusion is an incorrect Python setup on my laptop. Raised PR #12
?? The guile numbers I measure are 2x or 3x faster than the python numbers you just reported in the other pull request. And I'm pretty sure your CPU is faster than mine.
The guile and the python numbers will differ in the following ways:
The upshot is that guile runs at about half the speed of the native C++ code, while python runs about eight times slower.
Hmmm.. Except my first statement is not true. According to my diary notes, python is 2x slower to enter/leave than guile. In the 15 March 2015 entry, I was getting an enter/leave rate of 18K/sec for guile, and 8K/sec for python.
Also: the 15 March 2015 entry was reporting 48K/sec addNode for python-interpreted. (I was unable to make python-memoized work at that time. It's possible that python does not memoize, I'm not sure.) The same entry was showing 120K/sec for guile. So this was showing a better than 2x advantage over python. Flip-side: some of the numbers being reported seemed crazy; I'm not totally convinced that the benchmark is measuring things correctly. Also: python is single-threaded (they don't want to use locks, because that hurts their performance) but guile is fully threaded, and I suspect some stuff is running in other guile threads, and that the benchmark does not wait for those threads to finish before reporting a time. Maybe. I'm not at all sure that guile is using other threads; I just don't understand why the reported performance numbers are kind-of crazy-ish.
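For reference, the addNode rate on the Python side can be spot-checked directly through the bindings. This is only a minimal sketch, assuming the opencog.atomspace Python bindings (AtomSpace, types, add_node) are installed; the node count and names are illustrative, and it is not the atomspace_bm code itself.

```python
# Minimal sketch: measure the add_node rate through the Python bindings.
# Assumes the opencog Python bindings are importable as below; treat the
# details as illustrative rather than as the benchmark's actual code.
import time
from opencog.atomspace import AtomSpace, types

atomspace = AtomSpace()
n = 100000

start = time.time()
for i in range(n):
    # Each name is unique, so every call actually inserts a new node.
    atomspace.add_node(types.ConceptNode, "node-%d" % i)
elapsed = time.time() - start

print("addNode rate: %.0f atoms/sec" % (n / elapsed))
```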
Sorry, I guess I am giving conflicting remarks. Earlier, I posted "Last time the python benchmarks ran, they were as fast or faster than guile." Now I'm saying "python is slower". There are multiple issues involved:
When I tried to analyze the reason for the Python slowness on my laptop, I looked at the PythonEval code and supposed that it may be slow because PythonEval calls the interpreter for each statement separately. When the loop executes inside the Python interpreter it should be faster. So your homebrew Python benchmarks could be faster than the atomspace_bm ones for this reason. That raises the question: should PythonEval execution be included in the benchmark or not? I thought that PythonEval was included in atomspace_bm intentionally, as PythonEval is used to execute GroundedPredicateNodes and its performance affects GPN performance. But after reading the comments above, I think it may make sense to measure Python bindings performance and PythonEval (GPN) performance separately: use pure Python benchmarks to measure the former, and the C++ atomspace_bm to measure the latter. I have played around with the Guile and Python benchmarks and they show different performance (relative to each other) with different parameters. I think I should spend more time on this to reach some conclusions.
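A minimal pure-Python analogy of that per-statement overhead (this is not the PythonEval code itself, just an illustration of why feeding the interpreter one statement at a time is slower than letting the whole loop run inside it):

```python
# Pure-Python analogy of the per-statement overhead (not PythonEval itself):
# calling the interpreter once per statement vs. running the loop inside it.
import time
import timeit

n = 100000

# One interpreter call per statement, like an evaluator fed single statements;
# each call re-parses and re-compiles its tiny snippet.
start = time.time()
for i in range(n):
    exec("x = %d * 2" % i)
per_statement = time.time() - start

# The whole loop compiled once and run inside the interpreter in one call.
whole_loop = timeit.timeit("for i in range(%d): x = i * 2" % n, number=1)

print("per-statement: %.3fs, whole loop: %.3fs" % (per_statement, whole_loop))
```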
Yes.
Yes. However, .. and this is important: essentially no one will ever run "pure python" -- that's because we do not have any "pure python" code, at all. So, although that would measure the speed of the bindings, and can be used as a tuning guide, it does not measure anything used "in real life". There are currently just two usage scenarios:
I doubt that the second usage is done very much, and so its performance is not very interesting. There's a third quasi-usage that isn't actually a usage: connecting up ROS (which is pure python) to opencog. This is done by writing a python snippet to open a socket to the cogserver, and then sending scheme strings on that socket. (One could also send python strings on that socket, but it's never done.)
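That socket pattern looks roughly like the sketch below. It assumes a cogserver already running on its usual telnet port (17001) and uses the "scm" command to switch to the scheme shell; both are the usual defaults, but treat the details as assumptions rather than a tested recipe.

```python
# Rough sketch of the pattern described above: open a socket to a running
# cogserver and push a scheme string at it.  The port (17001) and the "scm"
# shell command are the usual cogserver defaults; check your configuration.
import socket

sock = socket.create_connection(("localhost", 17001))
sock.sendall(b"scm\n")                              # switch to the scheme shell
sock.sendall(b'(ConceptNode "hello-from-ros")\n')   # send a scheme string
print(sock.recv(4096).decode("utf8", "replace"))    # read whatever came back
sock.close()
```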
There is something that we do not do, but maybe we should: create a ROS node, pure-python, start up opencog inside of it (start the cogserver inside of it too), and then use the "pure-python" API to opencog to stuff atoms into the atomspace. This could be a lot faster, a lot more efficient, than sending ascii strings over sockets. You may want to consult with @jdddog and @amebel about the details of implementing this. There are several things that would need to be fixed, along the way:
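A very rough sketch of what such a pure-python ROS node could look like. It assumes rospy and the opencog Python bindings; the topic name, message type, and callback are placeholders, and starting the cogserver in-process is left out since the exact API for that would need to be checked.

```python
# Very rough sketch of a pure-python ROS node that keeps its own atomspace
# and stuffs atoms into it directly, instead of sending ascii over a socket.
# Assumes rospy and the opencog Python bindings; topic and message handling
# are placeholders, and starting the cogserver in-process is omitted.
import rospy
from std_msgs.msg import String
from opencog.atomspace import AtomSpace, types

atomspace = AtomSpace()

def on_message(msg):
    # Poke the incoming data straight into the atomspace via the bindings.
    atomspace.add_node(types.ConceptNode, msg.data)

rospy.init_node("opencog_bridge")
rospy.Subscriber("chatter", String, on_message)
rospy.spin()
```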
opencog/ghost_bridge#5 describes the above ROS-bridge idea. However, as I wrote it I realized that it's not actually all that interesting, since python-ROS never needs to directly poke atoms into the atomspace, and so there's not much point to it.
Yes, as far as I can see GHOST is written in Scheme. To make the ROS/GHOST bridge as thin as possible, one should either write the ROS node in Scheme (but ROS doesn't have a Scheme API), or write GHOST in Python, or use C++ to write both.
Merge opencog -> singnet
Python benchmarks are broken. At the moment all of the benchmarks are guarded by an #if HAVE_CYTHONX preprocessor condition:
benchmark/atomspace/atomspace/AtomSpaceBenchmark.cc, line 467 (commit 9974909)