Channels, a replacement for Queues #586
Conversation
Codecov Report
@@ Coverage Diff @@
## master #586 +/- ##
===========================================
+ Coverage 83.97% 99.31% +15.34%
===========================================
Files 94 96 +2
Lines 13001 11464 -1537
Branches 783 820 +37
===========================================
+ Hits 10918 11386 +468
+ Misses 2061 58 -2003
+ Partials 22 20 -2
|
Two things I'm having doubts about:
|
I'm leaning against supporting …

I'm also being tempted to make …

I'm also wondering if we should try making this API start out relatively minimal. I don't really want to add …

Still not sure what to do about the …
And finally I am wondering about how to roll this out. The idea would be to eventually replace Queue. |
|
Do you mean the queue that …? |
Yeah, mayyybe... this kind of consideration is why I'm more likely to keep …. I guess … |
No. Just your garden variety asyncio receive-data callback when you're hooking into a transport+protocol. That send-back queue is a prime candidate for … |
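As an illustration of that pattern (my sketch, not code from the PR or from asyncio): a synchronous receive-data callback can hand data off through a memory channel with an unbounded buffer via send_nowait, and a task drains the other end instead of polling an ad-hoc send-back queue.

```python
import math
import trio

async def main():
    # Unbounded buffer so a synchronous callback can always hand data off
    # without blocking (send_nowait never raises WouldBlock here).
    send_channel, receive_channel = trio.open_memory_channel(math.inf)

    def data_received(data):
        # A synchronous "data received" callback, e.g. invoked by a
        # protocol/transport layer, pushes into the channel.
        send_channel.send_nowait(data)

    # Simulate the callback firing a few times, then the source finishing.
    for chunk in [b"one", b"two", b"three"]:
        data_received(chunk)
    send_channel.close()

    # A Trio task drains the channel; iteration ends once the send end closes.
    async for chunk in receive_channel:
        print("got:", chunk)

trio.run(main)
```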
Some discussion of what the … |
Interesting realization: the fundamental problem is that when there are multiple ways that a task can wake back up, it can be difficult for one path to know how to clean up the state used by the other paths. In this case, I don't think … |
For discussion see: python-trio#586 (comment)
Okay, gave up on …. Also removed …. Added some initial tests, including a version of the fan-in example at the top of this thread. Weirdly enough, they seem to be passing. Still need a bunch more tests, docs, statistics and repr. |
I just experimented with the channels implementation (from commit abca8bd) as a closeable message queue for python-trio/trio-websocket#11. My use case is pretty simple, and this API is perfect for it. (I'm afraid I don't really understand the …) |
I played around a bit with what it would take to implement my prototype channel interfaces across a process boundary, on top of a …. This is much simpler if we standardize on …. The places where the APIs don't quite line up are: …
The first two don't seem like dealbreakers, or things we necessarily need to figure out right now. But that's what makes the last one tricky :-). We could make …. Or, if our guess is that inter-process channels will be important, then we could make it …. Any thoughts? As a user, would it annoy you to have to do …? |
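To make the trade-off concrete, here's a rough sketch (mine, not from the PR) of a channel-flavored wrapper over a byte stream, assuming we standardize on bytes: length-prefixed frames on top of a trio.abc.Stream. The class names and framing format are invented for illustration, and clone() -- the part that doesn't map cleanly across a process boundary -- is deliberately left out.

```python
import struct
import trio

class StreamSendChannel:
    """Illustrative only: a send-channel-ish wrapper that frames bytes
    onto an underlying trio.abc.Stream with a 4-byte length prefix."""

    def __init__(self, stream: trio.abc.Stream):
        self._stream = stream

    async def send(self, data: bytes):
        await self._stream.send_all(struct.pack("!I", len(data)) + data)

    async def aclose(self):
        await self._stream.aclose()

class StreamReceiveChannel:
    """Illustrative only: the matching receive side."""

    def __init__(self, stream: trio.abc.Stream):
        self._stream = stream

    async def _receive_exactly(self, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = await self._stream.receive_some(n - len(buf))
            if not chunk:
                # Stream ended cleanly: treat it like the channel being closed.
                raise trio.EndOfChannel
            buf += chunk
        return buf

    async def receive(self) -> bytes:
        header = await self._receive_exactly(4)
        (length,) = struct.unpack("!I", header)
        return await self._receive_exactly(length)
```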
Discovered while writing the docs that not having it is really confusing and hard to justify.
Done, maybe. I think the code, tests, docs, and newsfragments are all ready here. Anyone want to give reviewing it a try? The diff is large, but even just reading through the new docs and docstrings would be helpful! To summarize: …
|
From the code it seems clear that sending does a round-robin send to all the tasks, but the docs don't seem to specify this. Seems like it would be useful to include. |
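For concreteness, here's a rough sketch (mine, not from the PR) of the behavior being asked about: when several tasks receive from the same memory channel, each sent value is delivered to exactly one of them, rather than being copied to every receiver. The names and counts are arbitrary.

```python
import trio

async def consumer(name, receive_channel):
    # Each value sent into the channel is received by exactly one consumer.
    async with receive_channel:
        async for value in receive_channel:
            print(f"{name} got {value}")

async def main():
    send_channel, receive_channel = trio.open_memory_channel(0)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(consumer, "X", receive_channel.clone())
        nursery.start_soon(consumer, "Y", receive_channel.clone())
        await receive_channel.aclose()  # keep only the clones open
        async with send_channel:
            for i in range(6):
                await send_channel.send(i)  # each i goes to X *or* Y, not both

trio.run(main)
```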
This looks great; just some suggestions for documentation cleanup.
trio/_channel.py
# underlying stream when all SendChannels are closed.

# to think about later:
# - max_buffer_size=0 default?
I'd support this - it seems to go hand-in-hand with the docs saying "if in doubt, use 0".
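As a quick illustration of what the zero-sized buffer means in practice (my sketch, not code from the PR): with open_memory_channel(0), send() and receive() act as a rendezvous, so neither returns until the other side shows up.

```python
import trio

async def main():
    send_channel, receive_channel = trio.open_memory_channel(0)

    async def sender():
        print("sender: about to send")
        await send_channel.send("hello")  # blocks until someone receives
        print("sender: send completed")

    async with trio.open_nursery() as nursery:
        nursery.start_soon(sender)
        await trio.sleep(1)  # sender stays parked in send() this whole time
        print("receiver got:", await receive_channel.receive())

trio.run(main)
```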
Thanks everyone for the detailed comments, you caught a ton of stuff I missed :-). I think they've all been addressed now. @Fuyukai I tried to clarify this in a few places, but not quite sure how to do it best... would you mind taking another look, and if you think it could be clearer let me know where you expected to see it? |
Do you want to move the follow-up tweaks into a new issue before I hit merge? |
Done (see #719), plus a few more tweaks I noticed. I'd still like to hear @Fuyukai (or anyone's :-)) thoughts on whether/how to make it clear that receivers alternate rather than each getting their own copy of incoming objects, but I suppose that can always be handled in a follow-up PR if necessary. |
I think that the new version is clear enough because you 1/ switched from fan-in/fan-out to producers/consumers and 2/ called this out explicitly in the docs. But yeah, I think further improvements can go in another PR. |
Okay, let's do this! ✨ |
Whoops, well spotted. I'm going to bed now though. Perhaps someone will fix it while I'm asleep :-)
On Fri, Oct 5, 2018, Davide Rizzo commented on this pull request, in docs/source/reference-core.rst:
> +You can disable buffering entirely, by doing
+``open_memory_channel(0)``. In that case any task calls
+:meth:`~trio.abc.SendChannel.send` will wait until another task calls
+`~trio.abc.SendChannel.receive`, and vice versa. This is similar to
+how channels work in the `classic Communicating Sequential Processes
+model <https://en.wikipedia.org/wiki/Channel_(programming)>`__, and is
+a reasonable default if you aren't sure what size buffer to use.
+(That's why we used it in the examples above.)
+
+At the other extreme, you can make the buffer unbounded by using
+``open_memory_channel(math.inf)``. In this case,
+:meth:`~trio.abc.SendChannel.send` *always* returns immediately.
+Normally, this is a bad idea. To see why, consider a program where the
+producer runs more quickly than the consumer::
+
+.. literalinclude:: reference-core/channels-backpressure.py
Double colon above, so the literalinclude is not executed.
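The literalinclude'd file itself isn't reproduced in this transcript; as a rough stand-in (my sketch, not the actual channels-backpressure.py), this is the kind of producer/consumer mismatch the quoted paragraph warns about: with an unbounded buffer the producer never slows down, so the backlog of unread items grows without bound (the loop runs until interrupted).

```python
import math
import trio

async def producer(send_channel):
    n = 0
    while True:
        await send_channel.send(n)  # with an infinite buffer this never blocks
        n += 1
        await trio.sleep(0.1)       # producer: ~10 items per second

async def consumer(receive_channel):
    async for value in receive_channel:
        print("consumed", value)
        await trio.sleep(1)         # consumer: ~1 item per second -> backlog grows

async def main():
    send_channel, receive_channel = trio.open_memory_channel(math.inf)
    async with trio.open_nursery() as nursery:
        nursery.start_soon(producer, send_channel)
        nursery.start_soon(consumer, receive_channel)

trio.run(main)
```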
|
(not ready to merge, but posting it now for discussion)
This is a possible answer to #497. Essentially, it rethinks how Queue might work, based on Trio idioms instead of whatever someone stuck into the stdlib 15 years ago. The main difference is that you call open_channel and get back a put_channel and a get_channel, which act like the two ends of a stream and have their own closure states. Also, you can clone() an endpoint to extend the closure tracking to fan-in/fan-out scenarios. If you don't care about any of this, you can use it exactly like our current Queue, except that you pass around the put and get ends separately. But it also allows for fancier things like this fan-in example:
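(The fan-in example itself isn't preserved in this transcript. Below is a rough sketch of the same kind of pattern, written against the channel API as it eventually shipped -- open_memory_channel / send / receive / clone -- rather than the draft open_channel / put_channel / get_channel names used above.)

```python
import trio

async def producer(name, send_channel):
    # Each producer owns its own clone of the send end; closing the clone
    # is how this producer says "I'm done".
    async with send_channel:
        for i in range(3):
            await send_channel.send(f"{name}-{i}")

async def main():
    send_channel, receive_channel = trio.open_memory_channel(0)
    async with trio.open_nursery() as nursery:
        for name in ["producer-a", "producer-b"]:
            nursery.start_soon(producer, name, send_channel.clone())
        # Close the original send end; the channel stays writable as long
        # as any clone remains open.
        await send_channel.aclose()
        # The fan-in consumer: iteration ends only once *every* producer
        # clone has been closed.
        async for value in receive_channel:
            print("got", value)

trio.run(main)
```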
Decisions in this first draft (not necessarily good ones):
- … put() calls on the same handle immediately raise ClosedResourceError
- … get calls continue to drain any queued data, and then raise EndOfChannel
- __anext__ raises StopAsyncIteration instead of EndOfChannel
- … get() calls on the same handle immediately raise ClosedResourceError
- … close() on __del__ …: ResourceWarnings would be annoying for users who don't care about the close functionality
- … discards the data -- but maybe it should cause the last get_channel.close() to raise BrokenChannelError instead?

This "always_abort=" thing is kind of half-baked and needs more
review... there are more things that mess with _abort_func than I
realized, and I kind of hacked at them until they worked. For example,
it's kind of weird and non-obvious that if you use always_abort=True,
then …