Do a better job of communicating the problems with using nurseries/cancel scopes inside generators #638
Yeah, this is tricky. So one problem is that we aren't communicating the issues here well (partly because I've struggled to understand them myself!). My current understanding is that unless you're using …

We might be able to handle this better in 3.8, maybe with some version of PEP 568; in fact while writing this I had an idea which I've added to #495 :-). But right now there's not a lot we can do except getting better at educating users. We should definitely have a clear description of this issue and its consequences in the docs.

Also, you're right about the error message here: we should be able to detect this case (at least some of the time), and give a better error message. I think we can diagnose it reliably by detecting cases where we try to leave a cancel scope and it's not at the top of the cancel stack, or when a task exits with a cancel stack that's bigger than the one it started with (sketched below).
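A toy illustration of that detection idea (my sketch only, not trio's actual internals): track open cancel scopes in a stack, and complain when they don't exit in LIFO order.

```python
# Hypothetical sketch -- trio's real bookkeeping lives inside its run loop.
class CancelScopeStack:
    def __init__(self):
        self._stack = []

    def push(self, scope):
        # Called when execution enters a cancel scope.
        self._stack.append(scope)

    def pop(self, scope):
        # Called when execution leaves a cancel scope. If the scope being
        # left isn't at the top of the stack, something yielded out of a
        # cancel scope or nursery and resumed in a different context.
        if not self._stack or self._stack[-1] is not scope:
            raise RuntimeError(
                "cancel scope exited in a different context than it was "
                "entered in -- did you yield inside a cancel scope or "
                "nursery in an (async) generator?"
            )
        self._stack.pop()
```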
So since I am once again struggling with a generator-related issue (I'm now tending towards "avoid them"), let me ask for clarification. Until now I read this as meaning that as long as I am using `aclosing`, as in:

```python
import trio
from async_generator import aclosing

async def gen_with_task():
    async with trio.open_nursery() as n:
        n.start_soon(trio.sleep_forever)
        while True:
            await trio.sleep(1)
            yield 1

async def main():
    iterable = gen_with_task()       # THIS IS NEW
    async with aclosing(iterable):   # THIS IS NEW
        async for f in iterable:
            raise ValueError()

trio.run(main)
```

then things appear to go well. But you write:
Which would imply that using …

Does that mean I should not be doing what I am doing above, even with `aclosing`? Or is there some way to make it work using `@contextmanager`/`@asynccontextmanager`?
So the case I find myself struggling with is this:

```python
import trio
from async_generator import aclosing

async def fail_soon():
    raise ValueError()

async def subscribe():
    async with trio.open_nursery() as nursery:
        nursery.start_soon(fail_soon)
        while True:
            yield 1

async def main():
    events = subscribe()
    async with aclosing(events):
        async for event in events:
            # XXX
            await trio.sleep(1)

trio.run(main)
```

This causes a … My problem is the … As far as I can tell, this is what happens: … Somehow this leads to a …
Fix: ignore that error: …

Writing a context manager which does the same thing is left as an exercise to the reader. ;-)
Correct :-( (To be totally precise: a generator function can use a nursery – e.g. it's OK to call …)

@smurfix is right about what's happening with the …

There is good news though: you can make this API work basically the same as before, just, instead of using a bare async generator, wrap it in an async context manager:

```python
@asynccontextmanager
async def subscribe():
    events = await set_up_subscription()
    async with trio.open_nursery() as nursery:
        nursery.start_soon(background_task)

        async def events_gen_function():
            while True:
                yield get_something_from_background_task_or_whatever()

        async with aclosing(events_gen_function()) as events:
            yield events

# Usage:
async with subscribe() as events:
    async for event in events:
        ...
```

So the pattern is: use an `@asynccontextmanager` to hold the nursery, and have it yield a plain async generator – one that opens no nurseries or cancel scopes of its own – for the caller to iterate.
I guess we could have a …

This idea violates all kinds of conventional rules and taste, but maybe it'd be pragmatic and useful anyway? idk
Here’s one more example where one might want this (adapted from my recent code):

```python
async def collect(*awaitables):
    # Essentially the same thing as asyncio.as_completed()
    results = Queue(0)

    async def perform(a):
        await results.put(await a)

    async with open_nursery() as nursery:
        for a in awaitables:
            # Run each awaitable in its own child task
            nursery.start_soon(perform, a)
        while nursery.child_tasks:
            yield await results.get()
```

I also had a similar function, …

Side note: I understand the desire for minimal APIs, but trio needs more combinators like these. It has lots of useful stuff, but I want to be able to talk about it in my programs. (Similarly, …)
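For illustration, a hypothetical usage sketch of `collect` (the `double` coroutine is made up, and `Queue`/`open_nursery` are assumed to come from the old trio API as in the snippet above; note that iterating `collect` still means yielding out of a nursery, which is exactly the hazard this thread is about):

```python
import trio

async def double(x):
    await trio.sleep(x / 10)
    return 2 * x

async def main():
    # Results arrive in completion order, like asyncio.as_completed():
    async for result in collect(double(3), double(1), double(2)):
        print(result)  # 2, 4, 6 -- fastest awaitable first

trio.run(main)
```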
For the purposes of this comment, a “thread” will mean anything with a separate (Python) instruction pointer (whether or not it has a stack, is associated with a trio task or an OS thread, etc.). @njsmith says: …

Semantically speaking, what cancel scopes and (to some extent) nurseries implement is dynamic scoping. For example, I see the cancel scopes section of @njsmith’s essay as essentially advocating passing the cancel token in a dynamically scoped variable (Common Lisp “special variable”, Clojure “var”, etc.) instead of an explicit argument. What’s relevant about dynamic scope here is that when a thread is created, it should (generally) inherit it from its parent. AFAIU, this is indeed what trio does when a child task is created (at least wrt cancel scopes). However, Python provides one more way (!) to create a thread: create a generator (whether async or not). It seems that trio doesn’t (or is unable to) handle this at all, and so it all comes crashing down.
If I understand contexts, context vars and PEP 568 correctly (... :-/ ), a correct dynamic variable (suitable for e.g. a cancel token), given PEP 568, is simply something like the following...

```python
from contextlib import contextmanager
from contextvars import ContextVar

class Dynamic:
    def __init__(self, name):
        self._value = ContextVar(name)
        self._stack = []

    @property
    def value(self):
        return self._value.get()

    @contextmanager
    def __call__(self, value):
        self._stack.append(self._value.set(value))
        try:
            yield
        finally:
            self._value.reset(self._stack.pop())

if __name__ == '__main__':
    v = Dynamic('v')
    with v('spam'):
        print(v.value)      # spam
        with v('eggs and'):
            print(v.value)  # eggs and
        print(v.value)      # spam
```

... which really should be in the standard library.
Yeah, what cancel scopes need is something like dynamic scoping (or actually slightly more powerful – scopes can be nested, so we need to be able to fetch all the values on the stack, not just the latest value; see the toy model after this comment). That was a major motivation for writing PEP 568. But PEP 568 hasn't been accepted or implemented, so we can't actually use it currently.

And the issues with nurseries are actually worse: if you can yield out of a nursery block, you can actually recreate the equivalent of a `go` statement. At the python sprint last week, Yury (main asyncio maintainer) and I talked about this quite a bit, since asyncio has the same problem now that it's adding nurseries. The best we were able to come up with was to add a way for a context manager to forbid yields inside it. Though neither of us are overjoyed about it.

Regarding your `collect`:

```python
async with collect(...) as results:
    async for result in results:
        ...
```

Here the …
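Returning to the "fetch all the values on the stack" point above, here is a toy model of that requirement (my sketch; `StackedVar` and the deadline example are made up, not PEP 568's or trio's API):

```python
# A dynamic variable whose *entire* stack of enclosing values is visible,
# which is what nested cancel scopes need (e.g. to compute the effective
# deadline from every enclosing scope, not just the innermost one).
class StackedVar:
    def __init__(self):
        self._stack = []

    def push(self, value):
        self._stack.append(value)

    def pop(self):
        self._stack.pop()

    def innermost(self):
        return self._stack[-1]    # roughly what ContextVar.get() gives you

    def all_enclosing(self):
        return list(self._stack)  # what nested cancel scopes need

deadlines = StackedVar()
deadlines.push(30.0)
deadlines.push(5.0)
print(min(deadlines.all_enclosing()))  # 5.0 -- the effective deadline
```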
Agreed. I want to get #586 finished first (so the …)
By the way, it’s unclear to me how either PEP 568 or forbidding `yield` inside such context managers would …
@alexshpilkin yeah, that's one of the major sources of complexity here. The idea is that we would keep a stack of …

(There is a bit of discussion of this in PEP 568, if you search for …)
See https://github.com/python-trio/trimeter

That's not really a drop-in replacement for async generators, though, it's a higher-level thing.

Here's a sketch for a possible alternative to async generators, that tries to keep a lot of the ergonomics that make them attractive in the first place, while allowing background tasks and such: https://gist.github.com/njsmith/4db568255a276d4c7cf8a9a6b4295348

@miracle2k's original example using this:

```python
# Add decorator, and send_channel argument
@producer
async def gen_with_task(*, send_channel):
    async with trio.open_nursery() as n:
        n.start_soon(trio.sleep_forever)
        while True:
            await trio.sleep(1)
            # Replace 'yield X' with 'await send_channel.send(X)'
            await send_channel.send(1)

async def main():
    # Now we have to use 'async with' to get an async iterator
    # (actually a full-fledged ReceiveChannel)
    async with gen_with_task() as aiter:
        async for f in aiter:
            raise ValueError()
```
I really like the ergonomics-per-implementation-complexity-unit of …

I also played around with supporting real async generators, and I think it's possible to do so robustly: …

This is still technically suspending an async generator while it has nurseries/cancel scopes open, but the task that's iterating the async generator doesn't exit or enter any nurseries or cancel scopes while the async generator is suspended, and the exception forwarding effectively puts …
Oo, neat idea. I have a tentative intuition that we do want something like the …

One thing I'm not sure of: suppose …

On the one hand, it's kind of surprising to have …

Another thing I'm not sure of: the version that passes in a channel can call any method on the channel, including e.g. …
I believe the code above does in fact do that -- is there something I'm missing? The rethrowing is necessary to ensure that if a cancel scope inside the generator causes the send to be cancelled, the Cancelled exception passes through that cancel scope as it unwinds.

For what it's worth, I think the ergonomics of the generator solution might be further improved by suppressing `BrokenResourceError` …

I agree that the version that passes …

It should be feasible to write adapters that go in either direction, so I don't think it matters too much which one we decide should be "primitive".
We would probably also want to cancel the nursery inside …
Oh whoops, no, I just read too quickly on my phone :-).

Ah yeah. That's a compelling argument against the other option, where we don't re-throw. Though it also makes me a little more dubious about the whole thing... especially since I'm trying to convince Yury that we should extend the interpreter to disallow …

That's true, when people abandon a …

Makes sense to me.

The …
Here I've implemented "suppressing BrokenResourceError explicitly on the call to send() in adapter". It seems to work. Please correct me if I've missed something.

```python
import functools
import sys

import trio
from async_generator import aclosing, asynccontextmanager

def producer(wrapped):
    @asynccontextmanager
    @functools.wraps(wrapped)
    async def wrapper(*args, **kwargs):
        send_channel, receive_channel = trio.open_memory_channel(0)
        async with trio.open_nursery() as nursery:
            async def adapter():
                async with send_channel, aclosing(wrapped(*args, **kwargs)) as agen:
                    user_exit = False
                    while not user_exit:
                        try:
                            # Advance underlying async generator to next yield
                            value = await agen.__anext__()
                        except StopAsyncIteration:
                            break
                        while True:
                            try:
                                # Forward the yielded value into the send channel
                                try:
                                    await send_channel.send(value)
                                except trio.BrokenResourceError:
                                    user_exit = True
                                break
                            except BaseException:
                                # If send_channel.send() raised (e.g. Cancelled),
                                # throw the raised exception back into the generator,
                                # and get the next yielded value to forward.
                                value = await agen.athrow(*sys.exc_info())

            nursery.start_soon(adapter, name=wrapped)
            async with receive_channel:
                yield receive_channel

    return wrapper
```
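For illustration, hypothetical usage of the `producer` adapter above, mirroring @miracle2k's original example (`ticks` is a made-up name):

```python
@producer
async def ticks():
    # A real async generator with a nursery and a yield -- now driven by
    # the adapter task instead of being suspended by the consumer's loop.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(trio.sleep_forever)
        while True:
            await trio.sleep(1)
            yield 1

async def main():
    async with ticks() as receive_channel:
        async for value in receive_channel:
            print(value)

trio.run(main)
```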
It doesn't appear to handle the case where the generator finishes while an exception is being thrown into it, so the `StopAsyncIteration` escapes from `athrow()`:

```python
@producer
async def squares_in_range(low, high):
    try:
        for i in range(low, high):
            with trio.move_on_after(0.5):
                yield i ** 2
    finally:
        print("unwinding")

async def test_producer_cancelled():
    async with squares_in_range(0, 50) as squares:
        async for _ in squares:
            await trio.sleep(1)
```
I've fixed the StopAsyncIteration issue based on a hint here: agronholm/asyncio_extras#2

At this point there are several references to the async generator adapter approach in this issue -- there really should be a working, tested implementation. I'll make a PR for trio-util and open it for review.
trio-util: https://trio-util.readthedocs.io/en/latest/#trio_util.trio_async_generator
`trio`'s internals don't allow for async generator interleaving (and thus, by consequence, dynamic reset of async exit stacks containing `@acm`s), since doing so corrupts the cancel-scope stack. See details in:
- python-trio/trio#638
- https://trio-util.readthedocs.io/en/latest/#trio_util.trio_async_generator
- `trio._core._run.MISNESTING_ADVICE`

We originally tried to address this using `@trio_util.trio_async_generator` in backend streaming code, but for whatever reason it stopped working recently (at least for me), and it's more or less implemented the same way as this patch but with more layers and an extra dep. I also don't want us to have to address this problem again if/when that lib isn't able to keep up to date with whatever `trio` is doing.

So instead this is a complete rewrite of the concurrency design of our auto-reconnect ws API to move all reset logic and msg relay into a bg task which is respawned on reset-requiring events: user-specced msg recv latency, network errors, roaming events.

Details:
- drop all usage of `AsyncExitStack` and no longer require client code to (hackily) call `NoBsWs._connect()` on msg latency conditions; instead this is all done behind the scenes and the user can pass in a `msg_recv_timeout: float`.
- massively simplify impl of `NoBsWs` and move all reset logic into a new `_reconnect_forever()` task.
- offer use of `reset_after: int`, a count value that determines how many `msg_recv_timeout` events are allowed to occur before reconnecting the entire ws from scratch again.
Robustly wrapping up an async generator into a background task + channel has turned out to be hilariously difficult: you can see the long list of attempts above... and I recently found a latent deadlock in my best synthesis. I therefore suggest that we should ship a well-tested wrapper function as part of Trio itself, which will also let us offer very clear advice on how to safely use async generators ("only if wrapped in known decorators", cf. this lint rule). Here's a branch with my implementation. Changes relative to those above: …

```python
@background_with_channel()
async def agen():
    yield 1
    await trio.sleep_forever()  # simulate deadlock
    yield 2

@trio.run
async def main():
    async with agen() as recv_chan:
        async for x in recv_chan:
            print(x)
            break  # exit, cleanup should be quick
    # comment `nursery.cancel_scope.cancel()` and it hangs
```
This would make sense to me -- though as a counterpoint, I'm not sure this is that popular a use case. For instance, a quick GH code search for …
I can't tell whether this handles …
Even if it's not that popular, the consequences of using async gens wrong are pretty dire; it might be good to promote always using this wrapper, perhaps to just eliminate the issue?
Yep. I also think it'd be good to get this helper into Trio and likely AnyIO; that would also give me a clear recommendation I can make in PEP-789 and …
I am aware of the issues with cleaning up async iterators, PEP 533, and the need for `aclosing`. However, if `aclosing` is not used, which I overlook frequently, things can really blow up in your face:
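(The exact snippet was lost; presumably it was something like the following minimal sketch, i.e. the `gen_with_task` example quoted earlier in the thread without the `aclosing` lines marked "THIS IS NEW".)

```python
import trio

async def gen_with_task():
    async with trio.open_nursery() as n:
        n.start_soon(trio.sleep_forever)
        while True:
            await trio.sleep(1)
            yield 1

async def main():
    # No aclosing() here -- the generator is abandoned mid-iteration
    async for f in gen_with_task():
        raise ValueError()

trio.run(main)
```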
gives me: …
Note that in addition to the internal trio exceptions, the original cause is not shown at all. This happens when the async generator had a task running in a nursery.
Maybe there is no way to handle this better - I can't tell, this is above my paygrade! But in case there is a way to show a clearer error message, I think it might be worth it.