This repository has been archived by the owner on Aug 1, 2024. It is now read-only.

Support tokio #153

Closed
jasta opened this issue Jul 12, 2023 · 19 comments

Comments

@jasta

jasta commented Jul 12, 2023

General tracking issue to work on support for tokio in the esp-rs ecosystem. As I mentioned in #9 (comment), I'm looking to get this off the ground by first introducing a solid poll implementation in mio, then fixing things along the path in tokio, socket2, etc.

@jasta
Author

jasta commented Jul 12, 2023

Looks like your socket2 changes weren't merged either: rust-lang/socket2@master...esp-rs-compat:socket2:master . Just bumped into this one as I'm working to get a demo of tokio + esp32 off the ground. Any gotchas I should be aware of if I try to upstream this work?

@jasta
Author

jasta commented Jul 13, 2023

I've almost got it working with my forks but it's crashing at runtime. I'll post my sample repo and keep iterating tomorrow...

@jasta
Author

jasta commented Jul 13, 2023

Looks like it's failing because you need to call esp_vfs_eventfd_register before eventfd will work. That's news to me; it feels like a pretty surprising little gotcha folks will run into. Hmm, I wonder if smol or polling are doing that for you somehow?

@ivmarkov
Owner

> Looks like it's failing because you need to call esp_vfs_eventfd_register before eventfd will work. That's news to me; it feels like a pretty surprising little gotcha folks will run into. Hmm, I wonder if smol or polling are doing that for you somehow?

They are not. I'm calling that manually as well.
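For reference, the registration in question is a one-time VFS hookup that must run on the device before tokio/mio can create their wakeup eventfd. A minimal sketch via esp-idf-sys (field names per esp_vfs_eventfd.h; check your esp-idf-sys version, as bindings shift; this only compiles for the ESP-IDF target):

```rust
// Sketch: register the eventfd VFS driver once, early in main(), before
// starting any reactor that relies on eventfd() for wakeups.
use esp_idf_sys::{esp, esp_vfs_eventfd_config_t, esp_vfs_eventfd_register};

fn register_eventfd() -> Result<(), esp_idf_sys::EspError> {
    // max_fds: how many eventfd descriptors the VFS will hand out.
    let config = esp_vfs_eventfd_config_t { max_fds: 5 };
    esp!(unsafe { esp_vfs_eventfd_register(&config) })
}
```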

@ivmarkov
Owner

> Looks like your socket2 changes weren't merged either: rust-lang/socket2@master...esp-rs-compat:socket2:master . Just bumped into this one as I'm working to get a demo of tokio + esp32 off the ground. Any gotchas I should be aware of if I try to upstream this work?

Yes: my changes are against an older version of socket2 (0.3 if I'm not mistaken). You'll probably need to port these changes to either their 0.4 or their 0.5 (master) branch. Not sure what tokio depends on, and I wasn't aware that tokio needs socket2 in the first place.

@ivmarkov
Owner

While support for tokio is certainly very nice, as some smaller, network-related crates can then be utilized on the ESP IDF with little to no changes, I assume you realize that you'll still need a variation of async-task as the main executor, and tokio running via the smol "compat" layer?

I mean, is tokio's executor customizable to that level? The first wave of async drivers for esp-idf-hal will have their wakers awoken directly from an ISR. So unless tokio's task scheduler can be customized in a similar fashion, you still need async-task (in an incarnation identical or similar to edge-executor) at the helm. Alternatively, we need to think of a design where the async drivers' ISRs don't wake the wakers directly, but instead wake a high-priority FreeRTOS thread/task (possibly via the lightweight FreeRTOS task-notification mechanism, or via an event group), and that task in turn calls waker.wake() on the driver, so the executor wake-up and task scheduling happen in a normal thread execution context.

@jasta
Author

jasta commented Jul 13, 2023

> While support for tokio is certainly very nice, as some smaller, network-related crates can then be utilized on the ESP IDF with little to no changes, I assume you realize that you'll still need a variation of async-task as the main executor, and tokio running via the smol "compat" layer?

The main reason for my work is just to shave down the sharp corners in the async story for esp32 specifically and embedded platforms more generally. rust-esp32-std-demo is a good place to organize the discussion around the async story in general, I think (as opposed to the work in esp-idf-hal, which is great but doesn't speak much to async as a whole). I'm sure you're aware that right now in Rust it's nigh impossible to build an ergonomic and high quality networking library that is also agnostic to a particular async framework. I started my journey looking for a good re-usable CoAP library that I can port around with me to various embedded projects, and I quickly realized this just isn't realistic for so many reasons -- most of them bad ones. Then I started looking at matter-rs, and of course it meets the same fate:

https://github.com/project-chip/matter-rs/blob/sequential/matter/Cargo.toml#L53
https://github.com/project-chip/matter-rs/blob/sequential/matter/src/transport/runner.rs#L77

So what I'm really hoping for here is just to make this story a little neater overall, by not making it impossible to even compile code that happens to use tokio's net, time, etc. We shouldn't be asserting that it's bad to choose tokio's facilities for these things but good to choose smol's, or vice versa. It doesn't really make sense to me why it should even matter which one you use, so long as you abstract the actual executor away and design with good test-driven practices so you can run the core of your code on the host machine without a fancy embedded executor.

@jasta
Author

jasta commented Jul 13, 2023

Not sure if this helps or hurts my point but I just realized you committed to the matter-rs implementation that is getting itself so tightly coupled with smol and esp-idf. Hopefully we can at least agree that it would be much better if Rust's async story was a little bit more consistent across platforms. Anyway, happy hacking :P

@ivmarkov
Owner

> The main reason for my work is just to shave down the sharp corners in the async story for esp32 specifically and embedded platforms more generally. rust-esp32-std-demo is a good place to organize the discussion around the async story in general, I think (as opposed to the work in esp-idf-hal, which is great but doesn't speak much to async as a whole).

I don't have any preference where the discussion is handled, although in a way rust-esp32-std-demo is in my space, while the rest of the stuff is at least in the esp-rs ecosystem, which rust-esp32-std-demo is demoing, after all.

With that said, I'm a bit perplexed by your statement that esp-idf-hal doesn't speak much to async. As I just tried to explain, the async story of esp-idf-hal WILL require (in fact already DOES, as the gpio driver is already released) the executor to have certain capabilities, which I'm raising the point that tokio might not have, and which edge-executor contributes. tokio + ISR possibly not playing well is of course not the end of the world, as you can combine tokio with async-task, but still, FYI of sorts.

The thing is, my view on what is async on embedded has certainly evolved to include other stuff besides networking. What happens if you want to schedule networking async workloads together with embedded driver async workloads?

> I'm sure you're aware that right now in Rust it's nigh impossible to build an ergonomic and high quality networking library that is also agnostic to a particular async framework. I started my journey looking for a good re-usable CoAP library that I can port around with me to various embedded projects, and I quickly realized this just isn't realistic for so many reasons -- most of them bad ones.

I don't necessarily agree with that. It is much more work and involves a lot of generic metaprogramming, but it IS possible. It is just that folks are lazy and choose the easy path. Not having a single, "standard" async API (ideally no_std compatible) for networking and FS IO is IMO the culprit, not the executor. The former is probably blocked on AFIT and friends, including dyn async traits (to lessen the generics pressure), which are not even on the roadmap, indeed.

> Then I started looking at matter-rs, and of course it meets the same fate:

> https://github.com/project-chip/matter-rs/blob/sequential/matter/Cargo.toml#L53 https://github.com/project-chip/matter-rs/blob/sequential/matter/src/transport/runner.rs#L77

Yeah. Thanks for citing my own code. :-)

Did you notice this though? It gives you a lower level API, which you can arrange with whatever async executor / task spawner you want. Which of course brings back my earlier point that there is a certain laziness (or maybe unwillingness to take on too much complexity) on both sides: library creators and library consumers.

By the way - w.r.t emulating task spawning with select and ending up with a "one giant future": that might be an OK compromise in the embedded space - specifically for networking - because:

- MCU resources are so scarce that you probably want all your networking code (and more!) scheduled on a single thread, in an executor which is local to that thread. This, by the way, relieves you from futures needing to be Send + 'static, which plays well with no_std
- The extra latency you'll get this way (as the giant future has a big internal state and polling it is more expensive) is probably OK, at least for networking code in embedded, where high latencies are tolerated

> So what I'm really hoping for here is just to make this story a little neater overall, by not making it impossible to even compile code that happens to use tokio's net, time, etc. We shouldn't be asserting that it's bad to choose tokio's facilities for these things but good to choose smol's, or vice versa. It doesn't really make sense to me why it should even matter which one you use, so long as you abstract the actual executor away and design with good test-driven practices so you can run the core of your code on the host machine without a fancy embedded executor.

So first of all - absolutely! I'm not saying bringing tokio to ESP IDF (or embedded in general) does not have value - quite the opposite!

I'm just stating that the road is longer and more complex than that. One reason is, as I mentioned multiple times, non-networking, driver-based code. The other is that STD-only, tokio (or async-std) based crates are often not optimized for embedded. Meaning, they box and arc like mad, putting too much pressure on the allocator and bringing unpredictability at runtime w.r.t. OOMs due to heap fragmentation, and a general inability to estimate your heap memory consumption statically. And folks are complaining, you know. But then I'm also complaining, as Rust's placement-new story sucks too.

@ivmarkov
Owner

ivmarkov commented Jul 13, 2023

> Not sure if this helps or hurts my point but I just realized you committed to the matter-rs implementation that is getting itself so tightly coupled with smol and esp-idf. Hopefully we can at least agree that it would be much better if Rust's async story was a little bit more consistent across platforms. Anyway, happy hacking :P

Sorry if I'm a bit blunt here, but apparently you do not understand the matter-rs code base (yet). There is nothing in matter-rs that tightly couples it with either ESP IDF or smol. You are looking at optional dependencies. Quite the opposite: everything there is executor-agnostic, reactor-agnostic and networking-stack-agnostic (I mean, in the no_std and sequential branches, which are my work). Hell, the baremetal embedded-esp folks are running these with embassy-net and (CORRECTED, used to say smol) smoltcp!

Maybe you should start by looking at the README file in the main branch that explains the minimal requirements of matter-rs w.r.t. the underlying platform. :-)

I also don't have any vested interest in making matter-rs bound to the ESP IDF. For one, I'm not an Espressif employee. It just so happens that I often hack on the ESP32, so the first embedded demo of matter-rs happens to compile for the ESP IDF. Hopefully other folks will contribute for other MCUs.

If I wanted to bind matter-rs to Espressif, I would've been working in the exact opposite direction, don't you think? I would've kept matter-rs STD-only and allocating like mad (as ESP IDF is the only MCU framework which is STD compatible - which is by the way also my work).

I did exactly the opposite: no_std, no allocations, and async. I.e., compatibility with Embassy and whatnot ecosystems. :-)

Hope that clarifies my position and philosophy w.r.t. async Rust on embedded.

@ivmarkov
Owner

ivmarkov commented Jul 13, 2023

@Dirbaio @lulf Sorry guys for pulling you here out of nowhere, but really, is nobody from the Embassy ecosystem interested in assembling an onoff_light demo on top of the sequential branch of matter-rs ^^^? The pre-reqs for running matter-rs on any MCU platform are listed here.

It seems that the existing, ESP IDF-only demo is giving a very wrong impression w.r.t. matter-rs philosophy and dependencies to folks in the embedded space.

I saw recently that you have the W5500 ethernet driver running with embassy-net. And you've got network-stack offloading to an ESP32 MCU via an embassy-net driver as well.

I would have implemented an embassy-net demo (ideally - for a non-ESP32 host MCU!) myself, but I only have so much free time. :-(

But I can support of course. :-)

@ivmarkov
Owner

One roadblock of sorts is that the current matter-rs codebase needs ~140K (!) of memory. Reducing the number of sessions from 16 to 8 and the number of exchanges, say, from 8 to 4 should roughly halve this, though.

@jasta
Author

jasta commented Jul 13, 2023

Thanks for all the info @ivmarkov, it's gonna take me a bit to digest all of this. I am somewhat new to this space and am responding mostly to how difficult it is to be productive as a beginner. I come from an Android background where early on (10+ years ago) the story was very similar: developers had a lot of cognitive overhead to build anything, and reusing components across desktop/server use cases was effectively impossible because of memory constraints and incompatible APIs.

So what I'm trying to improve is that it's a lot of work to build quality reusable libraries. Just as I saw with Android: if you make it too nuanced and heady to do the right thing, you'll just end up with a lot of junk in the ecosystem and the platform stagnates. I understand that things like no_std and reducing the use of alloc are always going to be a bit of extra design work, but the async story IMO is adding even more headache without very good justification (why, for example, are there seemingly so many implementations of the exact same I/O code across mio, polling, socket2, etc., each with their own quirks and gotchas?)

@ivmarkov
Owner

No prob, and if you could keep up the good work on the tokio-to-poll port, that would be really appreciated! :)

@jasta
Author

jasta commented Jul 13, 2023

> I don't necessarily agree with that. It is much more work and involves a lot of generic metaprogramming, but it IS possible. It is just that folks are lazy and choose the easy path. Not having a single, "standard" async API (ideally no_std compatible) for networking and FS IO is IMO the culprit, not the executor. The former is probably blocked on AFIT and friends, including dyn async traits (to lessen the generics pressure), which are not even on the roadmap, indeed.

I thought a lot more about this statement and I think I might've realized my misunderstanding. Is it the case that using tokio-net (i.e. a simple hello world that uses UdpSocket from tokio instead of async-io) will make it so that you must use tokio's runtime? And therefore tokio-net really is incompatible with esp-idf, because you realistically won't be able to mix in embedded async stuff (like responding to gpio or whatever)? If that's the case then yes, I definitely see your arguments here, and truly this would make the async IO library you choose "toxic" (that is, your choice of async IO necessarily limits where and how your library can work). I'm going to experiment a bit with edge-net you linked and try to understand better what happens if you wanted to use tokio + tokio-console to debug code running on the host. If it can't be made to work, then indeed that seems like the problem to solve in the Rust async ecosystem...

@ivmarkov
Owner

ivmarkov commented Jul 13, 2023

> > I don't necessarily agree with that. It is much more work and involves a lot of generic metaprogramming, but it IS possible. It is just that folks are lazy and choose the easy path. Not having a single, "standard" async API (ideally no_std compatible) for networking and FS IO is IMO the culprit, not the executor. The former is probably blocked on AFIT and friends, including dyn async traits (to lessen the generics pressure), which are not even on the roadmap, indeed.

> I thought a lot more about this statement and I think I might've realized my misunderstanding. Is it the case that using tokio-net (i.e. a simple hello world that uses UdpSocket from tokio instead of async-io) will make it so that you must use tokio's runtime?

I assume by tokio's "runtime" you mean the Executor of tokio and then - no - that's (fortunately) not the case. You can use tokio-net (tokio's Reactor & async networking API) with a 3rd party executor, like async-task. It is not all roses and requires a small hack, but works.

In fact, this is what I was suggesting as well, right? That folks can use my tailored async executor, which is based on smol's async-task, with the Reactor of tokio, if they need to poll futures that deal with interrupt service routines in addition to polling networking futures coming from tokio-net. This crate (the "hack") would be necessary though. What I wanted to emphasize is that tokio's runtime (executor) does not support the ISR use case, that's all.

BTW what I don't remember is what happens if the particular crate also depends not just on tokio's reactor (tokio-net) but on the executor (a.k.a. "rt") of tokio as well. (As in using tokio::spawn to spawn tasks.). This should be checked - as in whether the async-compat crate also has a trick for that.

> And therefore tokio-net really is incompatible with esp-idf, because you realistically won't be able to mix in embedded async stuff (like responding to gpio or whatever)? If that's the case then yes, I definitely see your arguments here, and truly this would make the async IO library you choose "toxic" (that is, your choice of async IO necessarily limits where and how your library can work).

Again, I did not mean that and sorry for the confusion. In any case, support for tokio::spawn has to be checked in smol's compat crate.

> I'm going to experiment a bit with edge-net you linked and try to understand better what happens if you wanted to use tokio + tokio-console to debug code running on the host. If it can't be made to work, then indeed that seems like the problem to solve in the Rust async ecosystem...

What edge-net and matter-rs are trying to do is much more ambitious: they are trying not to commit to a particular executor, nor even to a particular Reactor (= async networking library, e.g. tokio-net). This is a difficult but real problem, perhaps best explained with the matter-rs use case. matter-rs has to be usable on a very wide range of platforms: from a core-only baremetal MCU to a STD-compatible embedded Linux. How do you support all of that? You build abstractions so that users can "plug" their own async networking stack into your library. That's what I meant by possible, but difficult, and raising the cognitive load when designing and then using the library.

The above ^^^ challenge would become easier once AFIT is stabilized and stuff like e.g. embedded-nal-async picks up steam. It won't be ideal, as the library would be coded against traits (more generics), but still much better than the current status quo.

@jasta
Author

jasta commented Jul 13, 2023

Wow, this is all really helpful. I've been doing a lot of reading today from your comments and I want to thank you for being patient with me. I see now that tokio's reactor (mio + glue) is analogous to smol's reactor (async-io + polling), and these concepts are mostly independent of the executor. Is it fair to say that if we had tokio's reactor working without non-upstreamed patches, it would be the preferred path over smol (which currently requires patches)?

Also, re: matter-rs and the dependency entanglement, I totally get the difficulty of making this library agnostic, apologies for coming off as glib. In a past life I did a lot of work creating modern apps supporting incredibly old and outdated Android phones, and much of that work centered around carefully pruning dependencies and injecting implementation behaviour all throughout complex library chains. I'm slowly realizing that where I keep saying it's a Rust async problem, I really mean the ball of mud that is, altogether: the challenges with no_std (no access to std I/O), tight RAM requirements (awkward trade-offs with devex/features), async being unstable (no AFIT in stable and lots of quirks around associated types), and the numerous quirks of each embedded platform's core API (randomly missing APIs, broken behaviour you need to work around, etc).

I think matter-rs is overall headed in the right direction with the goals of a no_std pure core (core business logic and packet formatting) and an async layer on top (a framework you can easily use). Though, if I may, one small criticism remains: it'd be good to break out the crates a bit to really prove that multiple different platforms can work. Even simple stuff like moving the esp-idf stuff out of the root and into its own subdir would help, as I've done in one of my hobby projects to learn Rust: https://github.com/jasta/esp32-balboa-spa/blob/main/Cargo.toml#L13-L17.

Again, sincere thanks for your patience!

@jasta
Author

jasta commented Jul 14, 2023

Closing this issue out for now and moving the rest of the discussion of logistics to: tokio-rs/tokio#5867. Let's revisit where things are at if I'm able to get this landed :)

@jasta jasta closed this as completed Jul 14, 2023
@ivmarkov
Owner

ivmarkov commented Jul 14, 2023

> Also, re: matter-rs and the dependency entanglement, I totally get the difficulty of making this library agnostic, apologies for coming off as glib. In a past life I did a lot of work creating modern apps supporting incredibly old and outdated Android phones, and much of that work centered around carefully pruning dependencies and injecting implementation behaviour all throughout complex library chains. I'm slowly realizing that where I keep saying it's a Rust async problem, I really mean the ball of mud that is, altogether: the challenges with no_std (no access to std I/O), tight RAM requirements (awkward trade-offs with devex/features), async being unstable (no AFIT in stable and lots of quirks around associated types), and the numerous quirks of each embedded platform's core API (randomly missing APIs, broken behaviour you need to work around, etc).

I think that's a pretty fair assessment of the status quo.

> Though, if I may, one small criticism remains: it'd be good to break out the crates a bit to really prove that multiple different platforms can work. Even simple stuff like moving the esp-idf stuff out of the root and into its own subdir would help, as I've done in one of my hobby projects to learn Rust: https://github.com/jasta/esp32-balboa-spa/blob/main/Cargo.toml#L13-L17.

You mean, isolating the esp-idf stuff as a separate crate in the workspace?

Sometimes that's possible - that is, when the core crates do not have any dependency on the ESP IDF. For the ESP IDF examples we can certainly do that, once we get more examples - as in - the examples of each platform in its separate folder.

But sometimes the tradeoff is not so easy. Say, the core crates need an mDNS responder. On ESP IDF, we should ideally use the ESP IDF one, while on Linux - say - the Avahi one. One way to abstract the mDNS responder in the core crates is to use traits. However if the functionality you are abstracting is async, you need async traits. Unfortunately, AFIT is not available on Rust stable yet. Worse - even if it was available, it would not have dyn support initially. That means I need to propagate the generic param for the mDNS responder all the way up the matter stack. If I do the same for - say - the UDP networking stack, persistence and so on, the Matter stack ends up generified with 3-4-5 generic variables, which is already quite a bit of a cognitive burden for the user. And that's IF AFIT was available.

An alternative is to fold all supported mDNS implementations directly into the core crates and have #[cfg(target_os = "espidf")] and suchlike. From the POV of the rest of the code, the mDNS struct would have the same pub(crate) API across all platforms; just the implementation would be different. Basically, take the same approach as in the Rust Standard Library. Not pretty for the library maintainer, but easy on the user, because if their platform is supported, they get a simple API which has few or no generics.

So as much as I want to have clean core crates, it is just not as easy as it sounds and it is all about difficult tradeoffs, that are often use-case specific.
