
Async I/O #399

Closed
killercup opened this issue Aug 9, 2016 · 147 comments
@killercup
Member

With futures-rs and tokio the Rust ecosystem recently got a whole new chapter in the async IO story. It would be pretty great for diesel to work with this as well.

From an enduser standpoint, a straightforward API change would be to 'just' use an Async PgConnection instead of the regular PgConnection (DB specific for more efficient implementations?). All traits generic over Connection (e.g. LoadDsl) would get an associated type to determine if they are returning e.g. a QueryResult<T> or a Future<T>.

@sgrif
Member

sgrif commented Aug 11, 2016

Worth noting that in the async case I'd also like load and get_results to return some sort of stream instead of Future<Vec<T>>, as it frees me up to do cursor-like things (granted, even just a future is fine). This of course will require re-implementing the postgresql adapter to not rely on libpq (which I'm already working on for other reasons; not sure if I've spoken about it outside of the podcast). It also will not be possible with sqlite.

@chpio

chpio commented Aug 11, 2016

@sgrif maybe something like the "async iterator" in JS? That would return an Iterator<Item=Future<T>>; the consumer would then call await on each future, but for this to work we would need compiler support for await. We could also make a simple helper function that takes an Iterator of futures and a closure, iterates over the futures, and calls the closure when each one is ready.
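The helper described above can be sketched with only the standard library. This is a hypothetical illustration, not a proposed diesel API: `block_on` is a minimal spin-polling executor (fine for the always-ready demo futures used here), and `for_each_row` is the "take an iterator of futures plus a closure" helper.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal single-threaded executor: polls a future to completion by
// spinning with a no-op waker. Only suitable for demo futures.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn noop_raw_waker() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker {
            noop_raw_waker()
        }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safe because `fut` is never moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

// The helper sketched in the comment: drive each future from the
// iterator and hand its output to the closure as it completes.
async fn for_each_row<I, F, T>(iter: I, mut f: F)
where
    I: Iterator,
    I::Item: Future<Output = T>,
    F: FnMut(T),
{
    for fut in iter {
        f(fut.await);
    }
}

fn main() {
    // Stand-in for "rows" arriving as futures.
    let futures = vec![1, 2, 3].into_iter().map(std::future::ready);
    let mut collected = Vec::new();
    block_on(for_each_row(futures, |row| collected.push(row)));
    assert_eq!(collected, vec![1, 2, 3]);
}
```

As sgrif notes in the next comment, this shape doesn't match how the data actually arrives (in batches, with unknown length), which is the argument for a true stream instead.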

@sgrif
Member

sgrif commented Aug 11, 2016

That's a possibility as well, but it doesn't reflect how I'll actually have access to the data, which is in batches. If we're doing true async IO I also will not have the length ahead of time, so it'd have to be an actual iterator, not a Vec<Future>, at which point why not just do a true stream?

@killercup
Member Author

Please note that futures-rs also has a Stream trait!

@sgrif
Member

sgrif commented Aug 11, 2016

Stream<T> is to Vec<T> as Future<T> is to Result<T, _> (or I guess Option<T> in this case)
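The analogy can be sketched with a toy trait. This is a hypothetical, synchronous stand-in for the futures-rs Stream trait (whose real method is poll_next and can return Pending); it only mirrors the shape: a Future resolves to one value, a Stream yields many over time.

```rust
use std::collections::VecDeque;

// Toy stand-in for futures-rs's Stream: yields items one at a time
// until exhausted, the way query results would arrive in batches.
trait ToyStream {
    type Item;
    // The real trait's `poll_next` may also return Pending; this
    // synchronous version resolves immediately for demonstration.
    fn next_item(&mut self) -> Option<Self::Item>;
}

// A "result set" backed by a queue of rows.
struct Rows(VecDeque<&'static str>);

impl ToyStream for Rows {
    type Item = &'static str;
    fn next_item(&mut self) -> Option<&'static str> {
        self.0.pop_front()
    }
}

fn main() {
    let mut rows = Rows(VecDeque::from(vec!["a", "b"]));
    let mut got = Vec::new();
    while let Some(r) = rows.next_item() {
        got.push(r);
    }
    assert_eq!(got, vec!["a", "b"]);
}
```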

@macobo

macobo commented Jan 7, 2017

@sgrif off-topic, but curious as to the reasons for moving off of libpq. Could you also post a link to the podcast?

@beatgammit

@macobo - I'm guessing it's The Bike Shed.

@sgrif
Member

sgrif commented Feb 1, 2017

Sorry that I didn't reply -- I had an unexpected baby. Yes, that is the correct link to the podcast.

@pyrossh

pyrossh commented Mar 1, 2017

Well there is tokio-postgres now which you can use for your async stuff.
https://docs.rs/tokio-postgres/0.2.1/tokio_postgres/

@yazaddaruvala

Just want to clarify, will async make the 1.0 milestone?

@sgrif
Member

sgrif commented Sep 27, 2017

No.

@sgrif
Member

sgrif commented Jan 12, 2018

So... I've fully implemented a PoC for this twice now. I'm going to close this for the time being.

This is not a statement that async I/O will never happen, but it is a statement that it is not currently on the roadmap, and I do not consider this to be actionable at this time. Neither tokio nor futures have stable APIs at the moment (tokio in particular is supposedly going to have a major overhaul at some point).

At absolute minimum, we need to wait for things to settle down there. However, there are problems that need to be solved in those APIs as well. I found that in order to get to an API that I was happy with, I had to resort to some pretty nasty hacks which had a lot of gotchas and I'm not comfortable shipping. It seemed that ownership hits you way harder here as well. It's virtually impossible to have anything that isn't 'static. This might seem obvious or NBD, but consider for a moment that you basically never actually own the connection. Everything takes &connection (or more likely a PooledConnection<'a, T>).

With the state of things as they are today, we would also need to re-implement connection pooling to make this work. This is on top of what would already be an enormous amount of work. This is not just "make it async". When we switch to tokio, we can no longer use the client libraries provided to us by the database creators. We need to learn the wire protocol that database uses, and implement it at the TCP level.

Finally, I hear a lot of people saying that the lack of async support is the big blocker for them using Diesel, but I'm not buying it. PostgreSQL is the only backend likely to get an async driver any time soon (I do not know the MySQL wire protocol, and it is mostly undocumented). The tokio-postgres crate, which people would likely be using instead if async was really the blocker for them, has had 300 downloads in the past 3 months. I'm not saying that async isn't a thing worth doing, but I think people are perhaps overstating how important it is to them.

Anyway with all that said, I'm going to close this issue. Open issues in this repo should represent something that is actionable, and that you could reasonably open a pull request to fix. I do not consider this to be actionable with the state of async in Rust today. I will keep my eye on that situation, and if that changes, I will open a new issue for this. But you should not expect async Diesel in the near future.

@sgrif sgrif closed this as completed Jan 12, 2018
@yazaddaruvala

I understand your frustrations with the current API, and understand you don't want to build async support just yet, only to re-do it later. All of that makes perfect sense, and this is your project so absolutely do what you feel is best.

I just want to point out that your rationale for "async isn't a big deal" is full of selection bias (i.e. inaccurate), and you're doing yourself a disservice by believing those stats.

Finally, I hear a lot of people saying that the lack of async support is the big blocker for them using Diesel, but I'm not buying it.

You're using the stats for tokio-postgres to infer the popularity of async needs, and since it is not popular you infer that async is not a blocker.

However, this is inaccurate, people who view async as a blocker are not using sync-Rust instead. They are simply not using Rust.

At least for me, I do not use Rust for any web based applications because, while hyper is pretty good, there is no great ecosystem around async in Rust yet.

@sgrif
Member

sgrif commented Jan 12, 2018

I want to re-iterate: I am not trying to say that "async isn't a big deal". I am trying to say it is not currently a priority, and it is not feasible for us to attempt to accomplish it at this point in time.

people who view async as a blocker are not using sync-Rust instead. They are simply not using Rust.

While you're probably right, and it may not have been a great stat to bring up, I'm not sure I agree with "if I can't use Diesel I'm just not going to use Rust" being the norm in this case.

I'm also not trying to use the stats for tokio-postgres to infer the popularity of async; I'm using them to give some number to the only async alternative available for the backend that we are most likely to support. Presumably there are a decent number of people who want all of:

  • Rust
  • PostgreSQL
  • Async I/O

This is what those people would be using.

But again, I'm probably calling too much attention to what was by far the smallest part of my reasoning for closing this (which I should also mention was discussed with the rest of the core team). This is mostly about it not being something we can do right now, which is why I wanted to close this issue. If circumstances change and async-diesel becomes feasible in the future, we'll open a new issue. But I want to avoid keeping issues open which cannot reasonably be closed with a pull request (this would of course be a very large pull request).

@batisteo

Just a heads up on Pleingres, “An asynchronous, Tokio-based, 100% Rust postgres driver”, as mentioned in the Pijul blog.
It might be an inspiration or a base for the future work. Or not.

@beatgammit

I don't know if this exists yet, but perhaps documentation (e.g. an example on the website) about how to integrate Diesel into an async project without blocking the whole event loop would help. If this example does connection pooling, then this seems like a reasonable solution until the async ecosystem in Rust stabilizes some.

I'm using Diesel on a site that I hope will have high-volume database operations (mostly simple CRUD, but relatively high volume, as in potentially hundreds or even thousands of more-or-less concurrent operations). I'll be figuring out how to do pooling properly, but I honestly wouldn't go through this effort if I weren't dead-set on using Rust and another ecosystem had an out-of-the-box solution (e.g. Go is "good enough").

And honestly, an async interface into a thread pool of synchronous connections is probably better for a high traffic site anyway, which is probably the bulk of the "async or bust" crowd. That's not to say that async in Diesel isn't useful, just that we can probably solve the problem for most people with documentation.
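The "async interface into a thread pool of synchronous connections" idea can be sketched with the standard library. This is a hypothetical illustration, not a Diesel API: the names (DbPool, Job) are invented, a fixed set of worker threads stands in for threads that would each own a blocking connection, and the returned channel receiver plays the role a Future would play in a real async facade.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A boxed blocking task submitted to the pool.
type Job = Box<dyn FnOnce() + Send + 'static>;

// Fixed-size pool of worker threads. In the real pattern each worker
// would own one blocking database connection.
struct DbPool {
    sender: mpsc::Sender<Job>,
}

impl DbPool {
    fn new(workers: usize) -> DbPool {
        let (sender, receiver) = mpsc::channel::<Job>();
        let receiver = Arc::new(Mutex::new(receiver));
        for _ in 0..workers {
            let receiver = Arc::clone(&receiver);
            thread::spawn(move || loop {
                // Lock only long enough to receive the next job.
                let job = receiver.lock().unwrap().recv();
                match job {
                    Ok(job) => job(),
                    Err(_) => break, // pool dropped, shut down
                }
            });
        }
        DbPool { sender }
    }

    // Run a blocking "query" on the pool; the receiver is the
    // stand-in for the future the async facade would return.
    fn run<T, F>(&self, f: F) -> mpsc::Receiver<T>
    where
        T: Send + 'static,
        F: FnOnce() -> T + Send + 'static,
    {
        let (tx, rx) = mpsc::channel();
        self.sender
            .send(Box::new(move || {
                let _ = tx.send(f());
            }))
            .unwrap();
        rx
    }
}

fn main() {
    let pool = DbPool::new(4);
    // Stand-in for a blocking Diesel query.
    let result = pool.run(|| 2 + 2);
    assert_eq!(result.recv().unwrap(), 4);
}
```

A real integration would wrap the receiver in a future (as futures-cpupool or tokio's spawn_blocking do) rather than blocking on recv.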

@fluxxu
Contributor

fluxxu commented Jan 25, 2018

This method is considered a workaround.

Just use
https://github.com/alexcrichton/futures-rs/tree/master/futures-cpupool
to wrap diesel's operations; then you can use diesel with futures/tokio nicely.
Define something like

use diesel;
use futures::prelude::*;
use futures_cpupool::CpuPool;
use lazy_static::lazy_static;

pub type Conn = diesel::pg::PgConnection;

// Note: E must also be convertible from the pool's error type for the
// map_err below to compile.
pub fn exec_async<T, E, F, R>(f: F) -> impl Future<Item = T, Error = E>
  where
    T: Send + 'static,
    E: From<::diesel::result::Error> + Send + 'static,
    F: FnOnce(&Conn) -> R + Send + 'static,
    R: IntoFuture<Item = T, Error = E> + Send + 'static,
    <R as IntoFuture>::Future: Send,
  {
    lazy_static! {
      static ref THREAD_POOL: CpuPool = CpuPool::new_num_cpus();
    }

    let pool = /* clone a ref of diesel's connection pool */;
    THREAD_POOL.spawn_fn(move || {
      pool
        .get()
        .map_err(E::from)
        .map(|conn| f(&conn))
        .into_future()
        .and_then(|f| f)
    })
  }

Then you can

exec_async(|conn| {
  diesel::insert_into(??).values(??).execute(conn)
}).and_then(...)

@ivanovaleksey
Contributor

@vorot93 @friktor @lnicola is the example above wrong?

@chpio

chpio commented Mar 26, 2018

@ivanovaleksey

  1. It's not real async code; we are talking about handling the whole connection asynchronously, not running blocking code on a threadpool.
  2. You can't do more concurrent queries than there are CPU cores (CpuPool::new_num_cpus()).

not saying it's "wrong"

@lnicola

lnicola commented Mar 26, 2018

@ivanovaleksey I can't say it's correct, as I didn't test it (e.g. I'm not sure about the trait bounds). In theory it should work. However,

  • it won't work with the async database drivers (I know there are a couple of implementations)
  • it's rather verbose
  • CpuPool is fixed-size; in general I'd prefer a more flexible thread pool
  • after the tokio changes that are coming soon, it will look a bit different (but still with a fixed thread pool)

So it works, but I'm not a fan of that approach.

@vorot93
Contributor

vorot93 commented Mar 27, 2018

@ivanovaleksey Pretty much what @chpio said. This code simply masks the blocking operations behind a thread pool. This doesn't help in the high-load use case where a large number of incoming DB queries will cause unacceptable delays for each of them.

@hayd

hayd commented Jun 8, 2018

See also actix-diesel example https://github.com/actix/examples/blob/f8e3570bd16bcf816070af20f210ce1b4f2fca69/diesel/src/main.rs#L64-L70

Perhaps offering a canonical pool implementation might be useful; that way diesel could have an async/futures API without using async drivers (yet).

This sync-driver / async-API approach is how slick (scala) works https://stackoverflow.com/a/29597868/1240268 and might suffice to appease the "people who view async as a blocker" (at least some of them).


Aside:

CpuPool is fixed-size; in general I'd prefer a more flexible thread pool

IME it's desirable to have a fixed size (with potentially multiple pools), that way all connections are made on startup... and generally your db server won't allow an unlimited number of connections anyway.

@chpio

chpio commented Jun 8, 2018

https://tokio.rs/blog/2018-05-tokio-fs/#blocking

@a3kov

a3kov commented Sep 10, 2018

This article by the SQLAlchemy author is a very good read on the topic (while it's about Python, many of the points listed there still hold in the Rust world):
http://techspot.zzzeek.org/2015/02/15/asynchronous-python-and-databases/

@Sytten
Contributor

Sytten commented Oct 8, 2021

There is also the very common use case where the application uses an async framework like actix-web, in which each request doesn't get its own thread. If you use diesel without running the query in a thread pool, you will freeze your actix worker thread and badly hurt performance for the other requests sharing that worker.

@Bajix
Contributor

Bajix commented Oct 8, 2021

There is also the very common use case where the application uses an async framework like actix-web, in which each request doesn't get its own thread. If you use diesel without running the query in a thread pool, you will freeze your actix worker thread and badly hurt performance for the other requests sharing that worker.

This is true of using Diesel within any async context; until an io_uring non-blocking solution is viable, the only reasonable way of using Diesel is with tokio::task::spawn_blocking. The extent to which async can improve this before io_uring is to make it so that we can get a connection that we then pass into tokio::task::spawn_blocking, and the performance diff is only that of having a parked thread while acquiring a connection instead of async-awaiting that connection. There's nothing wrong with using a bunch of blocking threads in a pool; the only thing that really needs to be avoided is blocking within an async context.

@weiznich
Member

weiznich commented Oct 9, 2021

@Sytten Can you provide some numbers for that claim? Otherwise this is just guessing.

@Bajix
Contributor

Bajix commented Oct 10, 2021

@Sytten Can you provide some numbers for that claim? Otherwise this is just guessing.

Actix Web uses a thread-per-core, single-threaded tokio runtime, which disables work-stealing, so when you block the runtime it's worse than in a standard multi-threaded tokio runtime. The thing is, though, this isn't a common use case; it's an easily avoidable anti-pattern. Diesel should only be used in conjunction with tokio::task::spawn_blocking, tokio::task::block_in_place, std::thread::spawn, or within a context where blocking doesn't matter, such as Lambda functions.

@Sytten
Contributor

Sytten commented Oct 11, 2021

I think @Bajix summed it up well. I use https://github.com/smol-rs/blocking personally, but I know actix-diesel is a thing and actix also provides a web::block as per their documentation https://actix.rs/docs/databases/. Not sure how much more info I can provide.
I will disagree that it is "uncommon" to ask for an ORM to be async if you live in the async world for the rest of your application, and it will definitely hurt performance if you don't apply the mitigation strategies mentioned (it can be easy to miss, since there is no warning/error that you are doing something wrong). As to whether diesel wants to support this use case or not is entirely up to you @weiznich :)

@ibraheemdev

ibraheemdev commented Oct 11, 2021

I think @weiznich's point is whether making Diesel async from the ground up will make it any more performant than running queries on a blocking thread pool. With a crate like tokio-diesel, integration in an async environment is just as seamless.

@Bajix
Contributor

Bajix commented Oct 11, 2021

I think @weiznich's point is whether making Diesel async from the ground up will make it any more performant than running queries on a blocking thread-pool. With a crate like tokio-diesel integration in an async environment is just as seamless.

Just speculation, but I'd imagine that any async solution predicated on internally using a blocking thread pool would be at best marginally faster than doing this externally as is done now. Async connection management can definitely be improved, but that could be an external crate. I think a better question is what things would make an endeavor to go full async worthwhile, and that's where the possibility of using io_uring looks of particular interest. Were Diesel to be fully async on top of io_uring, then this could facilitate thread-per-core architecture with syscall-level async I/O, and that would be considerably faster, but AFAIK more work needs to be done on the io_uring side.

@weiznich
Member

I will disagree that it is "uncommon" to ask for an ORM to be async if you live in the async world for the rest of your application, and it will definitely hurt performance if you don't apply the mitigation strategies mentioned (it can be easy to miss, since there is no warning/error that you are doing something wrong). As to whether diesel wants to support this use case or not is entirely up to you @weiznich :)

@Sytten First of all, it's your choice to live in an async world, not my choice. Again: async is not really needed for handling many requests, as long as you don't plan to handle larger amounts of traffic than, for example, crates.io. That written: Diesel supports this use case already by using the corresponding blocking method from whatever executor you use. It's possible to use diesel in that context, which is not to say that it is easy. The main point here is that there is nothing in diesel itself that prevents you from building something on top of diesel to make it more fitting for your use case. So my question about performance numbers is aimed at something that already uses those blocking-like interfaces. If you can show that this is not sufficient for your use case, we can talk about changing diesel itself.
Generally speaking: async is nothing that I personally need for my use case. There are other things in diesel where I need improvements and where I work on improvements. That does not mean that I'm against an async diesel interface, but it is currently nothing I'm working on. To address your last sentence specifically: Diesel supports this use case already via the blocking interfaces of your async executor. If you feel that it is too easy to miss these methods, there is nothing stopping you from providing an actix-diesel or tokio-diesel like crate of your own built on top of diesel. I don't see any reason why this needs to be included in diesel itself. If you are thinking that someone else needs to provide this for you for free, then please think again about what you expect from people in their free time. If you want me to implement it for you, see my GitHub Sponsors page for options to arrange some consulting.

@ibraheemdev Thanks for clarifying that, that's exactly my point.
@Bajix Maybe io uring can help there, maybe not. We need to see that if it's ready some day. That's nothing relevant for now.

@ibraheemdev

Would you be open to a PR integrating tokio/async-std-diesel into diesel under a feature flag?

@lnicola

lnicola commented Oct 11, 2021

I'm all for async APIs, but I don't think integrating "fake" (e.g. not using real async IO) async helpers in diesel is worth the complexity and potential for confusion, not to mention the expansion of the testing matrix.

@weiznich
Member

@ibraheemdev I do not see any reason why this needs to be in diesel itself. Beside the issues mentioned by @lnicola there are these problems:

  • The transaction problem is not solved yet.
  • Integrating an async interface into diesel itself means that we cannot easily change the API afterwards due to diesel's stability guarantee (at least not without a major version bump).
  • It adds a quite large amount of complexity to diesel that needs to be maintained. I personally do not currently have the capacity to do that. That might change if more people start helping maintain diesel and contribute fixes/features.

Because of these reasons I would prefer to keep it outside of diesel. This puts the work of maintaining this code on other people and allows figuring out a good API by experimentation. What I'm willing to do is provide any basic building blocks in diesel that are required to build such a crate, and I will happily answer diesel-related questions that come up during development.

@Sytten
Contributor

Sytten commented Oct 11, 2021

Not sure why the need for personal attacks. Rereading my comments, I specifically didn't ask for it to be implemented by anybody. I just pointed out a common use case for an ORM in an environment that supports async. Many popular projects like SQLAlchemy in Python made the decision to support both interfaces once the language started to support async officially. If that's not a priority for you, let the people who need it implement it and move on with your life.
Thanks for the existing work, I do appreciate it.

@dyedgreen

I'll take this opportunity to plug bb8-diesel again, which provides an async connection pool for diesel (it also wraps the connection types, such that all blocking connections are correctly flagged to the async runtime). I've used it for multiple projects and found it just works. Unless you want/need the underlying DB connections to be async, I see no reason why this would need to be supported within diesel 👍

@weiznich
Member

Not sure why the need for personal attacks. Rereading my comments I specifically didn't ask for it to be implemented by anybody

@Sytten That's because of this statement:

As to whether diesel wants to support this use case or not is entirely up to you @weiznich :)

In the context of this thread and your previous postings here, I interpret this statement as a request to implement it because you need it for your use case. If that's not what you wanted to say, then I misread that statement.

Many popular projects like SQLAlchemy in python made the decision to support both interface once the language started to support async officially.

Python and Rust are quite different, especially in terms of performance characteristics. So something that is required in Python to achieve good performance is probably not required in Rust. That's the reason why I'm asking for numbers.

If thats not a priority for you, let the people who need it implement it and move on with you life.
Thanks for the existing work, I do appreciate it.

That's something I've suggested over and over again in this thread, but for some reason people just keep coming back here and continue to complain that it is not implemented yet. Most of them seem to be unwilling to put the required work in; they just want to take it for free.

@lnicola

lnicola commented Oct 11, 2021

I think one of the more constructive things that can be done here is to implement a "proper" benchmark of diesel with blocking vs. e.g. tokio-postgres. I disagree that those in #399 (comment) are relevant, because they make one future and block_on it, leading to no real concurrency. A better one could be comparing actix-diesel and actix-pg from https://www.techempower.com/benchmarks/#section=data-r20&hw=ph&test=fortune (source code is available at https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Rust).

To me, it looks like the async version is indeed faster, but both are performing quite admirably, especially in comparison to Django, Rails, Spring or whatever people coming from other languages will be used to. In the end, there are a lot of options. If you like diesel or have existing code you want to integrate in a modern web framework, spawn_blocking is the way to go. If you want a pure async solution, use sqlx or tokio-postgres. If you're not going to reach 100k requests/sec (and realistically, most projects won't ever see more than 100 rps), use whatever you like better.

To be honest, I don't even like ORMs that much 😅 so I don't have any bone to pick here, but I don't think it's such a tragedy if diesel remains sync-only for a while.

@ghost

ghost commented Oct 11, 2021

I think it'll be useful for everybody wanting async to read this comment.

I arrived here on the search for native async diesel implementation, not 3rd party. I was confused why it did not exist yet so I read every comment. I now agree that Diesel should not be async right now.

I want async, but if the benefit is not significant then there's no point. I think that's how every software developer looks at a problem. No one has shown any evidence that sync Diesel is significantly worse than any async alternative available now. This tells me that async may be overrated for the use cases people have today. If you use any of the async libraries mentioned, you've already solved your problem.

I think everybody likes Diesel because it is a sound library. If a sound library doesn't implement a popular feature, there must be a good reason. The reason is that the async solutions available today are either not sound or too complex; if that isn't so, then please make a PR. The closest thing to a real solution is this getting solved rust-lang/rust#62290 or somebody hacking on this https://github.com/tokio-rs/io-uring.

I like sound solutions. I'd prefer sound async instead of async with gotchas.

If you absolutely wanted the best async possible today, I don't think you'd be using Diesel right now. If you want a convenient ORM that's good enough for broad range today including some async, use Diesel.

@weiznich
Member

I think one of the more constructive things that can be done here is to implement a "proper" benchmark of diesel with blocking vs. e.g. tokio-postgres. I disagree that those in #399 (comment) are relevant, because they make one future and block_on it, leading to no real concurrency.

I totally understand that our own benchmarks are not perfect by any means. That's the reason why I've asked for different implementations, but nobody seems to bother. There cannot be any concurrency in those benchmarks, as you cannot execute more than one statement per database connection at the same time. That's another reason why I believe that async database connections are not that important for performance. Instead, use an async connection pool like bb8.

A better one could be comparing actix-diesel and actix-pg from https://www.techempower.com/benchmarks/#section=data-r20&hw=ph&test=fortune (source code is available at https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Rust).

I don't believe those are that relevant either, as both benchmarks use different queries (note: many updates for diesel vs one update for tokio-postgres), have different handling of prepared statements (the tokio-postgres variant caches the prepared statement for all queries; diesel only caches statements that are generally safe to cache, so for example update statements are not cached), and use different actix configurations. (I do not have any in-depth actix knowledge, so some of this could be due to different requirements, but some of those options seem relevant, like the timeout thing.)
If someone wants to improve that, this could serve as a benchmark, although I would prefer something that does not depend on a web framework. (A web framework adds too many options to tweak, in my opinion.)

To me, it looks like the async version is indeed faster, but both are performing quite admirably, especially in comparison to Django, Rails, Spring or whatever people coming from other languages will be used to. In the end, there are a lot of options. If you like diesel or have existing code you want to integrate in a modern web framework, spawn_blocking is the way to go. If you want a pure async solution, use sqlx or tokio-postgres. If you're not going to reach 100k requests/sec (and realistically, most projects won't ever see more than 100 rps), use whatever you like better.

That's basically the point I've written above quite often already 😉. Most people don't need to use async/await at all, as they won't see the traffic where it is really necessary.

@tema3210

(Here I assume my previous attempt on the topic holds.)
As an alternative, we could make the user do a transaction via a macro:

...
diesel::async_transaction!(conn, |closure| { body }).await;
...

I expect this to expand into:

{ async {
    let mut conn: &mut Connection = conn; // moving in a reference, ensuring the type
    conn.transaction().await?.perform(|ref mut closure| { body }).await // note the ref mut binding mode; it guarantees that the user cannot "leak" our connection
}}

Is this both an ergonomic and a safe API?

@weiznich
Member

@tema3210 I'm not sure I understand how this solves the problem. The underlying problem with the transaction API is that we cannot really restrict the closure to the right lifetime in terms of the returned future. As far as I can tell, the closure remains in place in your proposed solution, so it does not really solve the problem.

@weiznich
Member

I've published a first preview of diesel-async. It provides a drop-in replacement for diesel's RunQueryDsl and Connection interfaces for async usage. The crate provides pure async Connection implementations for the mysql and postgres backends based on mysql_async and tokio-postgres.
I've chosen to implement this as an external crate for now to be able to decouple its version from diesel itself later. This allows us to improve the API before committing to a stable release. I will likely upload a first version to crates.io after the diesel 2.0 release, as the implementation depends on quite a lot of new APIs.


@treysidechain

Love the progress being made in diesel-async @weiznich! I wrote up deadpool-diesel-async on top of your work for pooling those sweet sweet async diesel connections. Heads up: I haven't written much async Rust library code, so it could use a healthy dose of scrutiny.

@Ten0
Member

Ten0 commented Oct 5, 2022

The best we came up with yet is this playground

(#399 (comment))

Food for thought/analysis/experiment: https://github.com/danielhenrymantilla/higher-order-closure.rs

@Ten0
Member

Ten0 commented Dec 13, 2024

Async closures are getting stabilized in the next release, and that may help with the issue mentioned above as well (not tested).

@weiznich
Member

weiznich commented Dec 13, 2024

@Ten0 I'm aware of that. Testing it would essentially only mean replacing the usage of ScopedBoxFuture here with the new language feature.

This will likely solve the boxing issue there and make the callback-based transaction approach more usable. It still doesn't address the issues around cancellation. Given that there is not even language-level consensus on whether and how something like AsyncDrop or any other solution to this problem should work, I'm not optimistic that we'll see a solution for this problem soon.
