
Bug fix for exiting a remote context-manager with an exception #228

Merged: 1 commit into tomerfiliba-org:master on Nov 11, 2017

Conversation

@koreno (Contributor) commented on Aug 20, 2017

When exiting a remote context-manager due to an exception, rpyc will attempt to pass the exception, its type, and the traceback to the remote __exit__ of the context-manager. Python then rightfully complains that the traceback isn't a proper traceback object:

E           _get_exception_class.<locals>.Derived: throw() third argument must be a traceback object
E           
E           ========= Remote Traceback (1) =========
E           Traceback (most recent call last):
E             File "/home/ofer/Sources/wekapp/deps/rpyc/rpyc/core/protocol.py", line 312, in _dispatch_request
E               res = self._HANDLERS[handler](self, *args)
E             File "/home/ofer/Sources/wekapp/deps/rpyc/rpyc/core/protocol.py", line 562, in _handle_callattr
E               return self._handle_getattr(oid, name)(*args, **dict(kwargs))
E             File "/usr/lib/python3.5/contextlib.py", line 77, in __exit__
E               self.gen.throw(type, value, traceback)
E           TypeError: throw() third argument must be a traceback object
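
For reference, a minimal sketch of how this is triggered (the address, port, and the remote context method are placeholders, mirroring the test added in this PR):

    import rpyc

    conn = rpyc.connect("localhost", 18861)    # placeholder address/port
    try:
        with conn.root.context(3):             # __enter__/__exit__ run on the server
            raise ValueError("boom")           # exception raised on the client side
    except ValueError:
        pass
    # rpyc forwards the exception info to the remote __exit__, but the local
    # traceback object is not a real traceback on the remote side, hence the
    # TypeError shown above for contextlib-based context managers.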

@coldfix (Contributor) left a comment

Thanks for your contributions! I have a few minor comments.

@@ -117,7 +117,10 @@ def _import_codetup(codetup):

def import_function(functup):
    name, modname, defaults, codetup = functup
    mod = __import__(modname, None, None, "*")
    try:
@coldfix (Contributor) commented:

This change seems unrelated to the PR topic. Also, I think it generally makes no sense to use __main__ as a fallback here (or does it?), and letting the import fail will result in better error messages. I will probably cherry-pick around this commit.
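
For context, the fallback being questioned presumably looks roughly like this (a hedged reconstruction, not the exact rpyc code):

    def import_function(functup):
        name, modname, defaults, codetup = functup
        try:
            mod = __import__(modname, None, None, "*")
        except ImportError:
            import __main__ as mod   # the fallback in question; letting the import fail gives clearer errors
        ...                          # reconstruction of the function from codetup elided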

@koreno (Contributor, Author) replied:

This commit seems to have tagged along unintentionally. You may indeed drop it.

@@ -560,6 +560,15 @@ def _handle_setattr(self, oid, name, value):
        return self._access_attr(oid, name, (value,), "_rpyc_setattr", "allow_setattr", setattr)
    def _handle_callattr(self, oid, name, args, kwargs):
        return self._handle_getattr(oid, name)(*args, **dict(kwargs))
    def _handle_ctxexit(self, oid, exc):
        if exc:
@coldfix (Contributor) commented on Aug 21, 2017:

What's the reason for the try...except here? To support context managers that require a real traceback object?

The traceback generated here is not the one corresponding to the original exception, so I'm inclined to think it might be better to let context managers that require a traceback fail. I'd tend to remove the whole if...else block and simply call self._handle_getattr(oid, "__exit__")(exc, type(exc), None) instead. CMs that require tracebacks are probably an edge case anyway. What do you think?

@koreno (Contributor, Author) replied:

Indeed it's to provide a valid traceback object. My only fear is CMs that are written naively, like:

def __exit__(self, *args):
    if args[0]:
        log_traceback(args[2])

In other words, expecting there to always be a traceback object if an exception object was passed.
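
For context, the re-raise trick used to synthesize a traceback for the remote __exit__ presumably looks roughly like this (a hedged sketch; the helper name is a placeholder, not the PR's exact code):

    import sys

    def exc_info_with_traceback(exc):
        if exc is None:
            return None, None, None
        try:
            raise exc                    # re-raise locally to obtain a real traceback object
        except BaseException:
            typ, val, tb = sys.exc_info()
        return typ, val, tb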

@@ -6,14 +6,19 @@

@coldfix (Contributor) commented:

These should be instance variables. Otherwise you can't call the exposed_context method twice (or you have to deal with resetting them after each call, but that's worse).

@koreno (Contributor, Author) replied:

I didn't want to change it too much, so I've added a reset method.

        self.assertTrue(on_context_exit)
        print( "got past on_context_exit" )

    def test_context_exception(self):
        with self.assertRaises(AssertionError):
@coldfix (Contributor) commented:

There are two things wrong here:

  • having multiple asserts within the with block means we don't know which one generated the exception
  • checking against AssertionError confuses the boundaries between test failures and desired behaviour

I think the test should look like this:

with self.conn.root.context(3):
    raise ValueError()
self.assertTrue(on_context_enter)
self.assertTrue(on_context_error)
self.assertTrue(on_context_exit)

(without the print; don't ask me why they are in the other test...)

and remove the raise line from the context manager.

@coldfix (Contributor) commented:

To clarify, is this PR a fix for locally raising an exception within the context of a remote context manager, or a fix for remotely reraising it from within the __exit__ handler of the context manager?

If the first one is the case, this test should be simplified as I suggested.

@coldfix (Contributor) commented on Nov 8, 2017

Hi @koreno, I will merge your PR if you clarify the following:

(I had asked this before, but it was attached to an outdated diff, so I'm not sure you saw my query)


To clarify, is this PR a fix for locally raising an exception within the context of a remote context manager, or a fix for remotely reraising it from within the __exit__ handler of the context manager?

If the first one is the case, this test should be simplified as I suggested.


where the suggestion was:

There are two things wrong here:

  • having multiple asserts within the with block means we don't know which one generated the exception
  • checking against AssertionError confuses the boundaries between test failures and desired behaviour

I think the test should look like this:

with self.conn.root.context(3):
    raise ValueError()
self.assertTrue(on_context_enter)
self.assertTrue(on_context_error)
self.assertTrue(on_context_exit)

(without the print; don't ask me why they are in the other test...)

and remove the raise line from the context manager.

@koreno (Contributor, Author) commented on Nov 9, 2017

To clarify, is this PR a fix for locally raising an exception within the context of a remote context manager, or a fix for remotely reraising it from within the __exit__ handler of the context manager?

Indeed!

The test I've added, in its final version, indeed matches what you've suggested.

@coldfix (Contributor) commented on Nov 9, 2017

The test I've added, in its final version, indeed matches what you've suggested.

It doesn't. There are several assertions within the with block, lines 56-58. Maybe you forgot to push?

@koreno (Contributor, Author) commented on Nov 10, 2017

Hm. I see I misunderstood your comment.

having multiple asserts within the with block means we don't know which one generated the exception

Why is that? A Python traceback clearly indicates which line an exception was thrown from.

checking against AssertionError confuses the boundaries between test failures and desired behaviour

The test uses assertRaises(MyException) (and raises a MyException), so I don't see what would be confusing.

@coldfix merged commit c6ee5fb into tomerfiliba-org:master on Nov 11, 2017
@coldfix (Contributor) commented on Nov 11, 2017

Ok fine.

@coldfix (Contributor) commented on Nov 11, 2017

Thanks!

@koreno (Contributor, Author) commented on Nov 11, 2017

Thank you too!

coldfix added a commit that referenced this pull request Jun 11, 2018
This release brings a few minor backward incompatibilities, so be sure to read
on before upgrading. However, fear not: the ones that are most likely relevant
to you have a relatively simple migration path.

Backward Incompatibilities
^^^^^^^^^^^^^^^^^^^^^^^^^^

* ``classic.teleport_function`` now executes the function in the connection's
  namespace by default. To get the old behaviour, use
  ``teleport_function(conn, func, conn.modules[func.__module__].__dict__)``
  instead.

* Changed signature of ``Service.on_connect`` and ``on_disconnect``, adding
  the connection as an argument (see the sketch after this list).

* Changed signature of ``Service.__init__``, removing the connection argument.

* The connection is no longer stored as ``self._conn``; this allows services
  that serve multiple clients with the same service object (see `#198`_).

* ``SlaveService`` is now split into two asymmetric classes: ``SlaveService``
  and ``MasterService``. The slave exposes functionality to the master but can
  no longer access remote objects on the master (`#232`_, `#248`_).
  If you were previously using ``SlaveService``, you may experience problems
  when feeding the slave with netrefs to objects on the master. In this case, do
  any of the following:

  * use ``ClassicService`` (acts exactly like the old ``SlaveService``)
  * use ``SlaveService`` with a ``config`` that allows attribute access etc
  * use ``rpyc.utils.deliver`` to feed copies rather than netrefs to
    the slave

* ``RegistryServer.on_service_removed`` is once again called whenever a service
  instance is removed, making it symmetric to ``on_service_added`` (`#238`_).
  This reverts PR `#173`_ on issue `#172`_.

* Removed module ``rpyc.experimental.splitbrain``. It's too confusing and
  undocumented for me and I won't be developing it, so it is better to remove
  it altogether. (It's still available in the ``splitbrain`` branch.)

* Removed module ``rpyc.experimental.retunnel``. Seemingly unused anywhere, no
  documentation, no clue what this is about.

* ``bin/rpyc_classic.py`` will bind to ``127.0.0.1`` instead of ``0.0.0.0`` by
  default

* ``SlaveService`` no longer serves exposed attributes (i.e., it now uses
  ``allow_exposed_attrs=False``)

* Exposed attributes no longer hide plain attributes if one otherwise has the
  required permissions to access the plain attribute. (`#165`_)
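
A hedged sketch of the new-style service API referenced above (the class name
and method bodies are illustrative; the connection is passed in and no longer
stored as ``self._conn`` by the base class)::

    import rpyc

    class EchoService(rpyc.Service):
        def on_connect(self, conn):        # new signature: connection as an argument
            print("client connected:", conn)

        def on_disconnect(self, conn):
            print("client disconnected")

        def exposed_echo(self, text):
            return text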

.. _#165: #165
.. _#172: #172
.. _#173: #173
.. _#198: #198
.. _#232: #232
.. _#238: #238
.. _#248: #248

What else is new
^^^^^^^^^^^^^^^^

* teleported functions are now defined in the connection's namespace by default

* Can now explicitly specify globals for teleported functions

* Can now use streams as context manager

* keep a hard reference to the connection in netrefs; this may fix some
  ``EOFError`` issues, in particular on Jython (`#237`_)

* handle synchronous and asynchronous requests uniformly

* fix deadlock when connections talk to each other from multiple threads (`#270`_)

* handle timeouts cumulatively

* fix possible performance bug in ``Win32PipeStream.poll`` (oversleeping)

* use readthedocs theme for documentation (`#269`_)

* actually time out sync requests (`#264`_)

* clarify documentation concerning exceptions in ``Connection.ping`` (`#265`_)

* fix ``__hash__`` for netrefs (`#267`_, `#268`_)

* rename ``async`` module to ``async_`` for py37 compatibility (`#253`_)

* fix ``deliver()`` from IronPython to CPython2 (`#251`_)

* fix brine string handling in py2 IronPython (`#251`_)

* add gevent_ Server. For now, this requires calling ``gevent.monkey.patch_all()``
  before importing rpyc. Client connections can already be made without
  further changes to rpyc, just using gevent's monkey patching. (`#146`_)

* add function ``rpyc.lib.spawn`` to spawn daemon threads

* fix several bugs in ``bin/rpycd.py`` that crashed this script on startup
  (`#231`_)

* fix problem with MongoDB, or more generally any remote objects that have a
  *catch-all* ``__getattr__`` (`#165`_)

* fix bug when copying remote numpy arrays (`#236`_)

* added ``rpyc.utils.helpers.classpartial`` to bind arguments to services
  (`#244`_); see the sketch after this list

* services can now be passed either as an instance or as a class (previously
  only as a class, `#244`_)

* The service is now in charge of setting up the connection, which it does in
  ``Service._connect``. This allows using custom protocols, e.g. by subclassing
  ``Connection``. More discussion and related features in `#239`_ and `#247`_.

* services can now easily override protocol handlers by updating
  ``conn._HANDLERS`` in ``_connect`` or ``on_connect``. For example:
  ``conn._HANDLERS[HANDLE_GETATTR] = self._handle_getattr``.

* most protocol handlers (``Connection._handle_XXX``) now directly get the
  object rather than its ID as their first argument. This makes overriding
  individual handlers feel much more high-level, and it turns out that this
  fixes two long-standing issues (`#137`_, `#153`_)

* fix bug with proxying context managers (`#228`_)

* expose server classes from ``rpyc`` top level module

* fix logger issue on jython
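
A hedged sketch of binding constructor arguments to a service with
``classpartial`` and serving it (the ``DBService`` class and its argument are
illustrative, not part of rpyc)::

    import rpyc
    from rpyc.utils.helpers import classpartial
    from rpyc.utils.server import ThreadedServer

    class DBService(rpyc.Service):
        def __init__(self, db_url):          # new-style __init__: no connection argument
            super().__init__()
            self.db_url = db_url

        def exposed_db_url(self):
            return self.db_url

    # classpartial binds the constructor arguments to the service class;
    # a plain instance or class may also be passed to the server.
    service = classpartial(DBService, "sqlite:///demo.db")
    server = ThreadedServer(service, port=18861)
    # server.start()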

.. _#137: #137
.. _#146: #146
.. _#153: #153
.. _#165: #165
.. _#228: #228
.. _#231: #231
.. _#236: #236
.. _#237: #237
.. _#239: #239
.. _#244: #244
.. _#247: #247
.. _#251: #251
.. _#253: #253
.. _#264: #264
.. _#265: #265
.. _#267: #267
.. _#268: #268
.. _#269: #269
.. _#270: #270

.. _gevent: http://www.gevent.org/