Tokio or threads? #1
> Given that the most popular http client library (…)

The nice thing about Rust is that threads and tokio can interoperate pretty easily; deferring to a thread, or running a future in a tokio core on its own thread, are both easily possible.
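For illustration, a minimal sketch of that interop, assuming current tokio (`tokio::runtime`, `tokio::sync::mpsc`) rather than the `tokio-core` API that existed when this was written; none of these names come from the thread:

```rust
// Cargo.toml (assumption): tokio = { version = "1", features = ["full"] }
use std::thread;
use tokio::sync::mpsc::{self, UnboundedReceiver};

// runs on its own thread: builds a Tokio runtime and drives a future on it
fn run_async_side(mut rx: UnboundedReceiver<String>) {
    let rt = tokio::runtime::Runtime::new().expect("failed to build Tokio runtime");
    rt.block_on(async {
        while let Some(msg) = rx.recv().await {
            println!("async side got: {}", msg);
        }
    });
}

fn main() {
    let (tx, rx) = mpsc::unbounded_channel::<String>();
    let handle = thread::spawn(move || run_async_side(rx));

    // ordinary threaded code interoperates with the async side through the channel
    tx.send("hello from a plain thread".to_string()).unwrap();
    drop(tx); // closing the channel lets the async loop (and its thread) finish
    handle.join().unwrap();
}
```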
Ah, thanks! So I had an idea on the sans-io approach, after reading some of the advice on Brett Cannon's docs (linked above). Suppose we define the "wormhole core" object to take a set of mostly "send and forget" API calls, and a single call to retrieve the "action queue". For any given runtime/concurrency environment, the Core will be wrapped in a layer that knows about IO. That layer also exposes a concurrency-style-appropriate API to the app (maybe Deferreds, maybe Futures, maybe Delegation-style). Ignoring for a moment the Transit and new Dilation features, I think the core API would look like this:
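A hypothetical sketch of that shape (the names are illustrative, not the real magic-wormhole.rs API):

```rust
// a sans-io core: pure protocol state machines, no sockets, timers, or threads
pub enum Action { /* variants sketched separately below */ }

pub struct WormholeCore {
    pending_actions: Vec<Action>,
}

impl WormholeCore {
    // "send and forget" inputs: callers never block on a result
    pub fn start(&mut self) { /* ... */ }
    pub fn allocate_code(&mut self) { /* ... */ }
    pub fn set_code(&mut self, _code: &str) { /* ... */ }
    pub fn send_message(&mut self, _plaintext: &[u8]) { /* ... */ }
    pub fn websocket_message_received(&mut self, _payload: &str) { /* ... */ }
    pub fn close(&mut self) { /* ... */ }

    // the single retrieval call: drains everything the core now wants done
    pub fn get_actions(&mut self) -> Vec<Action> {
        std::mem::take(&mut self.pending_actions)
    }
}
```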
and the rule is, after you've called one or more of the API functions, you're required to call the action-retrieval function and process everything it hands back. That Actions enum would have the following variants:
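A plausible reconstruction of such an enum, based on the surrounding description; these variant names are guesses, not the original list:

```rust
pub enum Action {
    // IO-side actions: the wrapping layer performs these against the network
    WebSocketOpen { url: String },
    WebSocketSendMessage { payload: String },
    WebSocketClose,

    // API-side actions: the wrapping layer turns these into Deferreds,
    // Futures, or delegate callbacks for the application
    GotWelcome { welcome: String },
    GotCode { code: String },
    GotMessage { plaintext: Vec<u8> },
    GotClosed,
}
```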
A Deferred-style wrapper would react to the various result actions by firing the corresponding Deferreds.

To add Transit/Dilation, I think we add three more actions (TCPOpen, TCPSendBytes, TCPClose) and some corresponding input APIs. And we might also need a timer.

In each case, there's some driver that has to know how the IO and timers work. In Twisted that maps to the Reactor. In Rust, it could spin out a thread for each IO channel; maybe the top-level driver lives in a thread of its own (communicating with the actual application through a channel) and spends all of its time select-blocking on one of (a request from the app channel, a response from a websocket/TCP channel, a response from the timer channel). After any one of those sources yields an event, it notifies the WormholeCore and then drains the action queue before going back to the select.
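A minimal sketch of that driver loop, using only std and stand-in types (all names here are made up for illustration):

```rust
use std::sync::mpsc;

// every IO source funnels its events into one channel for the driver
enum Event {
    AppRequest(String),        // request from the application channel
    WebSocketMessage(String),  // response from the websocket thread
    TimerFired(u64),           // expiry from the timer thread
}

enum Action {
    WebSocketSend(String),
    NotifyApp(String),
}

struct WormholeCore {
    pending: Vec<Action>,
}

impl WormholeCore {
    fn handle_event(&mut self, event: Event) {
        // stand-in for the real state machines: each input may queue up
        // zero or more actions for the IO layer to perform
        match event {
            Event::AppRequest(req) => self.pending.push(Action::WebSocketSend(req)),
            Event::WebSocketMessage(msg) => self.pending.push(Action::NotifyApp(msg)),
            Event::TimerFired(_) => {}
        }
    }
    fn get_actions(&mut self) -> Vec<Action> {
        std::mem::take(&mut self.pending)
    }
}

fn driver_loop(mut core: WormholeCore, events: mpsc::Receiver<Event>) {
    // block until any source yields an event, notify the core, then drain
    // the action queue before going back to waiting
    while let Ok(event) = events.recv() {
        core.handle_event(event);
        for action in core.get_actions() {
            match action {
                Action::WebSocketSend(msg) => println!("-> websocket: {}", msg),
                Action::NotifyApp(msg) => println!("-> app: {}", msg),
            }
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(Event::AppRequest("set_code 4-purple-sausages".into())).unwrap();
    tx.send(Event::WebSocketMessage("{\"type\": \"welcome\"}".into())).unwrap();
    drop(tx); // no more events: driver_loop returns once the channel closes
    driver_loop(WormholeCore { pending: Vec::new() }, rx);
}
```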
Oh, flow-control will complicate the Transit/Dilation APIs if we provide something like Twisted's IPushProducer model (a producer that can be told to pause and resume as the consumer's buffers fill up). And we need extra API surface to manage Subchannels. In Twisted we handle this with Endpoints, so the input API would grow some subchannel-oriented calls, and Actions would grow new subchannel variants:
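A hypothetical sketch of those additions; the concrete lists from the original comment were not preserved, so every name here is a stand-in:

```rust
#[derive(Clone, Copy)]
pub struct SubchannelId(pub u64);

pub struct WormholeCore; // stand-in for the core sketched earlier

impl WormholeCore {
    // extra "send and forget" inputs for Dilation subchannels
    pub fn allocate_subchannel(&mut self) -> SubchannelId { SubchannelId(0) }
    pub fn open_subchannel(&mut self, _id: SubchannelId) {}
    pub fn send_subchannel_data(&mut self, _id: SubchannelId, _data: &[u8]) {}
    pub fn close_subchannel(&mut self, _id: SubchannelId) {}
}

// extra Action variants the IO layer would have to handle
pub enum SubchannelAction {
    SubchannelOpened(SubchannelId),
    SubchannelDataReceived(SubchannelId, Vec<u8>),
    SubchannelClosed(SubchannelId),
    // IPushProducer-style flow control: the core asks the IO layer to
    // stop or resume feeding it data for a given subchannel
    PauseSubchannel(SubchannelId),
    ResumeSubchannel(SubchannelId),
}
```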
I think that could all be mapped to Endpoints. Allocating the subchannel without opening it is a bit weird. Maybe we could use a different sort of handle to bind those parts together.
Oops, that commit referenced an issue on my fork and shouldn't have closed it. Anyway, that PR still closes this issue (with some other commits, though). Basically, I've decided to use async-std instead of tokio because it's more intuitive (async-std didn't even exist at the time this issue was discussed, lol). I've decided against a sans-io approach to keep the implementation simpler for now.
The choice of concurrency/scheduling mechanism is probably the most significant bit of the design. I'm coming from 15 years of programming under Twisted (the primary "event-loop" / non-threaded approach in the Python world, slowly being augmented/enhanced/coopted by `asyncio`), so my instinct is to look for the most Twisted-like system in the Rust world, and apparently that's Tokio. I know there's a learning curve (I remember going through that with Twisted), but my hunch is that it's the right direction to go, because it's my experience that threads are always the wrong choice.

Except.. things are a lot better in the Rust world, and threads are not necessarily immediately fatal. And golly, Tokio is pretty daunting. So I could be convinced that, instead, each part of magic-wormhole that waits on a different source should run in its own thread. I can't readily imagine how that'd all fit together (does most of the logic live in a single thread that gets spawned off when you create the Wormhole object? and then each TCP connection attempt lives in a separate one? and does the top-most API run through a channel of some sort?), but I'm sure lots of other projects do it that way, so it must be possible. We'd start with the `ws` crate instead of `websocket`.

One constraint to keep in mind is that we want this library to be usable from other languages: this should provide a core of functionality, to which other folks can write bindings (from Node, or Ruby, etc). Other environments are going to have other approaches to networking (and concurrency). So we should try to write the main part of this in "sans-io" style, with somewhat-swappable portions written to the actual IO functions (the Python implementation is not very sans-io, but all the state machines interface to the websocket connection in just one place, so it might not be too hard to fix that). Again, I'm not sure quite what the result will look like: probably a core that accepts calls like "hey, I just got a websocket message for you", and a separate piece that takes a URL, makes the websocket connection, and sends the results to the first piece. The do-it-all-within-Rust library glues the two of them together (presenting a single `wormhole::create()` API), while the e.g. Lua bindings would expose just the core, but also provide a do-it-all-within-Lua API that uses a Lua-specific websocket thing glued to the core bindings.
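A rough sketch of that split, with entirely made-up module and function names (not the real magic-wormhole.rs API), assuming the core and the IO driver live in separate modules:

```rust
pub mod sansio_core {
    // the pure piece: accepts "hey, I just got a websocket message for you"
    pub struct WormholeCore;
    impl WormholeCore {
        pub fn new() -> Self { WormholeCore }
        pub fn websocket_message_received(&mut self, _payload: &str) { /* ... */ }
    }
}

pub mod ws_io {
    // the Rust-specific piece: takes a URL, owns the websocket connection,
    // and forwards every received frame into the core
    use super::sansio_core::WormholeCore;
    pub struct WebSocketDriver { pub core: WormholeCore }
    impl WebSocketDriver {
        pub fn connect(_relay_url: &str, core: WormholeCore) -> Self {
            WebSocketDriver { core }
        }
    }
}

// the do-it-all-within-Rust convenience API glues the two together; a Lua or
// Node binding would wrap only the core and supply its own websocket driver
pub fn create(relay_url: &str) -> ws_io::WebSocketDriver {
    ws_io::WebSocketDriver::connect(relay_url, sansio_core::WormholeCore::new())
}
```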