Support tokio #153
Looks like your socket2 changes weren't merged either: rust-lang/socket2@master...esp-rs-compat:socket2:master. Just bumped into this one as I'm working to get a demo of tokio + esp32 off the ground. Any gotchas I should be aware of if I try to upstream this work?
I've almost got it working with my forks but it's crashing at runtime. I'll post my sample repo and keep iterating tomorrow...
Looks like it's failing because you need to call esp_vfs_eventfd_register before eventfd will work. That's news to me; it feels like a pretty surprising little gotcha folks will run into. Hmm, I wonder if smol or polling are doing that for you somehow?
They are not. I'm calling that manually as well.
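For anyone else hitting this, the manual registration looks roughly like the sketch below. It is written against the esp-idf-sys bindings; the exact config struct fields can differ between ESP-IDF versions, so treat it as an illustration rather than the canonical call:

```rust
// Sketch: register the eventfd VFS driver once at startup, before the async
// reactor (mio/polling) tries to create an eventfd. Assumes esp-idf-sys
// bindings; field names may vary across ESP-IDF versions.
fn register_eventfd() -> Result<(), esp_idf_sys::EspError> {
    let config = esp_idf_sys::esp_vfs_eventfd_config_t { max_fds: 5 };
    esp_idf_sys::esp!(unsafe { esp_idf_sys::esp_vfs_eventfd_register(&config) })
}
```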
Yes: my changes are against an older version of socket2 (0.3 if I'm not mistaken). You'll probably need to land these changes in either their 0.4 or their 0.5 (master) branch. Not sure what tokio depends on, and I wasn't aware that tokio needs socket2 in the first place.
While support for tokio is certainly very nice, as some smaller, network-related crates can then be used on the ESP IDF with little to no changes, I assume you realize that you'll still need a variation of
I mean, is tokio's executor customizable to that level? The first wave of async drivers for
The main reason for my work is just to shave down the sharp corners in the async story for esp32 specifically and embedded platforms more generally. rust-esp32-std-demo is a good place to organize the discussion around the async story in general, I think (as opposed to the work in esp-idf-hal, which is great but doesn't speak much to async as a whole). I'm sure you're aware that right now in Rust it's nigh impossible to build an ergonomic, high-quality networking library that is also agnostic to a particular async framework. I started my journey looking for a good reusable CoAP library that I could carry with me to various embedded projects, and I quickly realized that just isn't realistic, for many reasons -- most of them bad ones. Then I started looking at matter-rs and of course it meets the same fate: https://github.com/project-chip/matter-rs/blob/sequential/matter/Cargo.toml#L53 So what I'm really hoping for here is to make this story a little neater overall by not making it impossible to even compile code that happens to use tokio's net, time, etc., as if we're somehow asserting that it's bad to choose tokio's facilities for these things but good to choose smol's, or vice versa. It doesn't really make sense to me why it should even matter which one you use, so long as you abstract the actual executor part away and design with good test-driven practices so you can run the core of your code on the host machine without a fancy embedded executor.
Not sure if this helps or hurts my point, but I just realized you committed to the matter-rs implementation that is getting itself so tightly coupled with smol and esp-idf. Hopefully we can at least agree that it would be much better if Rust's async story were a little bit more consistent across platforms. Anyway, happy hacking :P
I don't have any preference where the discussion is handled, although in a way
With that said, I'm a bit perplexed by your statement that
The thing is, my view on what is async on embedded has certainly evolved to include other stuff besides networking. What happens if you want to schedule networking async workloads together with embedded driver async workloads?
I don't necessarily agree with that. It is much more work, involves a lot of generic metaprogramming, but IS possible. It is just that folks are lazy and choose the easy path. Not having a single, "standard" async API (ideally no_std compatible) for networking and FS IO is IMO the culprit, not the executor. The former is probably blocked on AFIT and friends. Including dyn async traits (to lessen the generics pressure) - which are not even on the roadmap indeed.
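To make the "single standard async API" point concrete, here is the rough shape such a trait could take - purely illustrative names, written with async-fn-in-trait, not an existing crate's API:

```rust
// Illustrative only: an executor-agnostic, no_std-friendly async UDP trait.
// Any platform (ESP IDF, Linux, an embassy-net stack, ...) could implement
// it, and libraries written against it would not care which reactor or
// executor drives the futures. Requires async-fn-in-trait (Rust 1.75+).
pub trait AsyncUdpSocket {
    type Error;

    async fn send_to(
        &mut self,
        data: &[u8],
        peer: core::net::SocketAddr,
    ) -> Result<usize, Self::Error>;

    async fn recv_from(
        &mut self,
        buf: &mut [u8],
    ) -> Result<(usize, core::net::SocketAddr), Self::Error>;
}
```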
Yeah. Thanks for citing my own code. :-) Did you notice this, though? It gives you a lower-level API, which you can arrange with whatever async executor / task spawner you want. Which of course brings up my earlier point that we have a certain laziness (or maybe unwillingness to take on too much complexity) on both sides - library creators and library consumers. By the way - w.r.t. emulating task spawning with
So first of all - absolutely! I'm not saying bringing tokio to ESP IDF (or embedded in general) does not have value - quite the opposite! I'm just stating that the road is longer and more complex than that. One reason is - as I mentioned multiple times - non-networking, driver-based code. The other is - by the way - that STD-only, tokio (or async-std) based crates are often not optimized for embedded. Meaning, they box and arc like mad, putting too much pressure on the allocator and bringing unpredictability at runtime w.r.t. OOMs due to heap fragmentation, plus a general inability to estimate your heap memory consumption statically. And folks are complaining, you know. But then I'm also complaining, as Rust's placement-new story sucks too.
Sorry if I'm a bit blunt here, but apparently you do not understand the
Maybe you should start by looking at the
I also don't have any vested interest to make
If I wanted to bind
I did exactly the opposite - no_std, no allocations, and
Hope that clarifies my position and philosophy w.r.t. async Rust on embedded.
@Dirbaio @lulf Sorry guys for pulling you here out of nowhere, but really - does nobody from the Embassy ecosystem have any interest in assembling an
It seems that the existing, ESP IDF-only demo is giving a very wrong impression w.r.t.
I was looking lately at how you have the W5500 ethernet driver running with
I would have implemented an
But I can support, of course. :-)
One roadblock of sorts is that the current
Thanks for all the info @ivmarkov, it's gonna take me a bit to digest all of this. I am somewhat new to this space and am responding mostly to how difficult it is to be productive as a beginner. I come from an Android background where early on (10+ years ago) the story was very similar: developers had a lot of cognitive overhead to build anything, and reusing components across desktop/server use cases was effectively impossible because of memory constraints and incompatible APIs. So what I'm trying to improve is that it's a lot of work to build quality reusable libraries. Just like I saw with Android, if you make it too nuanced and heady to do the right thing, you'll just end up with a lot of junk in the ecosystem and the platform stagnates. I understand that things like no_std and reducing the use of alloc are always going to be a bit of extra design work, but the async story IMO is adding even more headache without a very good justification (why, for example, are there seemingly so many implementations of the exact same I/O code across mio, polling, socket2, etc., each with its own quirks and gotchas?)
No prob, and if you could keep up the good work on the tokio-to-poll port, that would be really appreciated! :)
I thought a lot more about this statement and I think I might've realized my misunderstanding. Is it the case that using tokio-net (i.e. a simple hello world that uses UdpSocket from tokio instead of async-io) will make it so that you must use tokio's runtime? And therefore tokio-net really is incompatible with esp-idf, because you realistically won't be able to mix in embedded async stuff (like responding to gpio or whatever)? If that's the case then yes, I definitely see your arguments here, and truly this would make the async IO library you choose "toxic" (that is, your choice of async IO necessarily limits where and how your library can work). I'm going to experiment a bit with the edge-net you linked and try to understand better what happens if you wanted to use tokio + tokio-console to debug code running on the host. If it can't be made to work, then indeed that seems like the problem to solve in the Rust async ecosystem...
I assume by tokio's "runtime" you mean tokio's executor, and then - no - that's (fortunately) not the case. You can use
In fact - this is what I was suggesting as well, right? - that folks can use my tailored async executor, which is based on smol's
BTW, what I don't remember is what happens if the particular crate also depends not just on tokio's reactor (
Again, I did not mean that, and sorry for the confusion. In any case, support for
What
The above ^^^ challenge would become easier once AFIT is stabilized and stuff like e.g.
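To make the reactor-vs-executor split concrete on the host side, mixing tokio's I/O driver with a non-tokio executor can look roughly like the sketch below. It is the same idea the async-compat crate automates, shown here manually and not ESP-IDF specific:

```rust
// Sketch: tokio provides only the *reactor* here; the future itself is
// polled by smol's executor. A dedicated thread keeps tokio's I/O driver
// running, and Handle::enter makes tokio's net types usable from the
// "foreign" future.
fn main() -> std::io::Result<()> {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_io()
        .build()?;
    let handle = rt.handle().clone();

    // Drive tokio's I/O driver on its own thread.
    std::thread::spawn(move || rt.block_on(std::future::pending::<()>()));

    // The actual task runs on smol, not on tokio's executor.
    smol::block_on(async move {
        let _guard = handle.enter();
        let socket = tokio::net::UdpSocket::bind("0.0.0.0:0").await?;
        socket.send_to(b"hello", "127.0.0.1:5683").await?;
        Ok(())
    })
}
```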
Wow, this is all really helpful. I've been doing a lot of reading today based on your comments and I want to thank you for being patient with me. I see now that tokio's reactor (mio + glue) is analogous to smol's reactor (async-io + polling), and these concepts are mostly independent of the executor. Is it fair to say that if we had tokio's reactor working without non-upstreamed patches, it would be the preferred path over smol (which currently requires patches)? Also, re matter-rs and the dependency entanglement: I totally get the difficulty of making this library agnostic, apologies for coming off as glib. In a past life I did a lot of work making modern apps support incredibly old and outdated Android phones, and much of that work centered around carefully pruning dependencies and injecting implementation behaviour throughout complex library chains. I'm slowly realizing that where I keep saying it's a Rust async problem, I really mean the whole ball of mud: the challenges with no_std (no access to std I/O), tight RAM requirements (awkward trade-offs with devex/features), async being unstable (no AFIT in stable and lots of quirks around associated types), and the numerous quirks of each embedded platform's core API (randomly missing APIs, broken behaviour you need to work around, etc.). I think matter-rs is overall headed in the right direction with the goals of a no_std pure core (core business logic and packet formatting) and an async layer on top (a framework you can easily use). Though, if I may, one small criticism remains: it'd be good to break out the crates a bit to really prove that multiple different platforms can work. Even simple stuff like moving the esp-idf bits out of the root and into their own subdir would help, as I've done in one of my hobby projects to learn Rust: https://github.com/jasta/esp32-balboa-spa/blob/main/Cargo.toml#L13-L17. Again, sincere thanks for your patience!
Closing this issue out for now and moving the rest of the logistics discussion to: tokio-rs/tokio#5867. Let's revisit where things are at if I'm able to get this landed :)
I think that's a pretty fair assessment of the status quo.
You mean, isolating the esp-idf stuff as a separate crate in the workspace? Sometimes that's possible - that is, when the core crates do not have any dependency on the ESP IDF. For the ESP IDF examples we can certainly do that once we get more examples - as in, the examples for each platform in their own separate folder. But sometimes the tradeoff is not so easy. Say the core crates need an mDNS responder. On ESP IDF we should ideally use the ESP IDF one, while on Linux - say - the Avahi one. One way to abstract the mDNS responder in the core crates is to use traits. However, if the functionality you are abstracting is
An alternative is to fold all supported mDNS implementations directly in the core crates and have
So as much as I want to have clean core crates, it is just not as easy as it sounds, and it is all about difficult tradeoffs that are often use-case specific.
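For what it's worth, the trait-based variant would look roughly like this sketch - hypothetical names, only meant to illustrate the shape of the abstraction:

```rust
// Sketch: the core crate defines the responder interface; each platform
// crate (ESP IDF, Avahi on Linux, ...) supplies its own implementation.
// Names are hypothetical and only illustrate the trait-based approach.
pub trait MdnsResponder {
    type Error;

    fn set_hostname(&mut self, hostname: &str) -> Result<(), Self::Error>;

    fn add_service(
        &mut self,
        instance_name: &str,
        service_type: &str, // e.g. "_matter._tcp"
        port: u16,
    ) -> Result<(), Self::Error>;
}
```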
General tracking issue to work on support for tokio in the esp-rs ecosystem. As I mentioned in #9 (comment), I'm looking to get this off the ground by first introducing a solid poll implementation in mio, then fixing things along the path in tokio, socket2, etc.