Pool::acquire times out #622
I've run into the same issue. I am not seeing connections recycled when they are dropped, which causes problems when multiple queries happen on one web request/future. I have worked around it by using deadpool. Here's my (very simple) implementation. (Also note: I needed to add `use std::ops::DerefMut;` to call `.deref_mut()` on the pooled connection.)
use std::ops::DerefMut; // needed for .deref_mut() on the pooled connection

use async_trait::async_trait;
use deadpool::managed::{Manager, RecycleResult};
use log::*;
use sqlx::{Connect, Connection, Error as SqlxError, Executor, PgConnection};

type Pool = deadpool::managed::Pool<PgConnection, SqlxError>;

struct DbPool {
    url: String,
}

impl DbPool {
    fn new(url: String, size: usize) -> Pool {
        Pool::new(DbPool { url }, size)
    }
}

#[async_trait]
impl Manager<PgConnection, SqlxError> for DbPool {
    async fn create(&self) -> Result<PgConnection, SqlxError> {
        PgConnection::connect(&self.url).await
    }

    async fn recycle(&self, obj: &mut PgConnection) -> RecycleResult<SqlxError> {
        // Validate the connection before handing it back out.
        Ok(obj.ping().await?)
    }
}

async fn main() {
    // ...
    let pool = DbPool::new(url, 16);
    // ...
    sqlx::query_as("select * from users where email = $1")
        .bind(&email)
        .fetch_one(pool.get().await?.deref_mut())
        .await?;
    // ...
}
Thanks for sharing your workaround! deadpool looks great. |
Currently the pool doesn't behave well if the future waiting on `acquire()` is cancelled (which is what the timeout does). @skuzins, in your example, could you please try debug-printing the pool after the second `acquire()` fails?

if let Err(e) = pool.acquire().await {
    println!("acquire error: {}, pool state: {:?}", e, pool);
}
I'm not sure any future would/should normally be cancelled in this example; I believe they should all just run to completion. Here's the debug message, please let me know if you need more information.
The timeout itself cancels the internal future that's actually waiting in the queue. However, I'd expect to see
I am not seeing cancellations in my code and the example posted doesn't have them either. It feels more like an issue with waking up after the pool has stopped using a connection.
@abonander The thing is, there shouldn't be any timeouts. The task acquires the connection, performs a short query, drops the connection and does the same thing once more. The whole program should finish within milliseconds without any timeouts. Here's the output with
fixes #622 Signed-off-by: Austin Bonander <austin@launchbadge.com>
FWIW, I think I'm hitting this (or something like it). I have a loop which waits 5 seconds and then calls acquire(). Once the 5 seconds elapse the pool shows all connections idle, but the acquire still times out.
We're hitting an issue that looks similar to launchbadge/sqlx#622. I was able to work around it by disabling fairness, but we can just avoid the lookup entirely if active workers is >= the pool size. Signed-off-by: Joe Grund <jgrund@whamcloud.io>
I'm not certain this bug has been fixed, as I've been required to use the
Yeah, I think adding this periodic wake was a mistake: https://github.com/launchbadge/sqlx/blob/master/sqlx-core/src/pool/inner.rs#L180

After a single wait period it effectively makes the pool unfair again, as it becomes dependent on the timing of when tasks wake and poll the future.

Something I realized in 0.6 that I want to backport is that if a task is "woken" (as in its `Waker` has been called) but is cancelled before it actually acquires a connection, it should wake the next task in the queue so the wakeup isn't lost.

The idle reaper might also be unintentionally preempting waiting tasks; I need to look into that.
Just spotted another bug, too: since waking a waiting task involves popping it from the wait queue, a task that is woken but doesn't actually get a connection is no longer in the queue and has no way to be woken again before it times out.

The idle reaper is also definitely preempting some waiting tasks, since it directly pops connections from the idle queue without checking if any tasks are waiting.
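To make that handoff concrete, here is a minimal sketch of the general pattern being described: if a task was woken but is cancelled before it actually acquires a connection, the wakeup is passed to the next waiter instead of being lost. This is not sqlx's actual internals; `WaiterQueue` and `WokenGuard` are hypothetical names used only for illustration.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::task::Waker;

// Hypothetical shared wait queue: tasks waiting for a connection park their
// Waker here; releasing a connection pops one Waker and wakes it.
#[derive(Default)]
struct WaiterQueue {
    waiters: Mutex<VecDeque<Waker>>,
}

impl WaiterQueue {
    fn park(&self, waker: Waker) {
        self.waiters.lock().unwrap().push_back(waker);
    }

    // Called when a connection is released: wake exactly one waiting task.
    fn wake_one(&self) {
        if let Some(waker) = self.waiters.lock().unwrap().pop_front() {
            waker.wake();
        }
    }
}

// Guard held by a task from the moment it is woken until it actually acquires
// a connection. If the task is cancelled in between (the guard is dropped
// without `acquired()` being called), the wakeup is handed to the next waiter
// instead of being lost.
struct WokenGuard {
    queue: Arc<WaiterQueue>,
    acquired: bool,
}

impl WokenGuard {
    fn new(queue: Arc<WaiterQueue>) -> Self {
        WokenGuard { queue, acquired: false }
    }

    fn acquired(mut self) {
        // The wakeup was "used up" by successfully acquiring a connection.
        self.acquired = true;
    }
}

impl Drop for WokenGuard {
    fn drop(&mut self) {
        if !self.acquired {
            // Woken but cancelled before acquiring: pass the wake along.
            self.queue.wake_one();
        }
    }
}
```

In a real pool, the acquire future would create a `WokenGuard` as soon as its `Waker` fires and call `acquired()` only once it holds a connection; dropping the future anywhere in between (for example, because a timeout cancelled it) then re-wakes the queue automatically.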
* a task that is marked woken but didn't actually wake before being cancelled will instead wake the next task in the queue
* a task that wakes but doesn't get a connection will put itself back in the queue instead of waiting until it times out with no way to be woken
* the idle reaper now won't run if there are tasks waiting for a connection, and also uses the proper `SharedPool::release()` to return validated connections to the pool so waiting tasks get woken

closes #622, #1210 (hopefully for good this time)

Signed-off-by: Austin Bonander <austin@launchbadge.com>
I ran into a similar problem; my code is mentioned in #1199 (comment), where I make async queries using the pool. With the fix it now seems to be resolved.
That's awesome to hear, thanks!
I'm also running into this issue with a somewhat silly use case: hundreds to low thousands of tokio tasks running small write queries against a PgPool. It works for a couple of minutes, and then buckles with a pool timeout error.
#1211 fixed my issue, thanks @abonander!
Our external cloud database doesn't seem to be able to make do with just one. It does seem that there is a sqlx bug making this worse: launchbadge/sqlx#622. But not a lot of options for now, and a large connection pool is good anyway.
I have a minimal repro of the timeout issue in async code. This might be incorrect usage, please tell me if that's the case!

Cargo.toml

main.rs
@fjoanis-legion That
Locking this issue to prevent further necroing. If anyone is encountering sporadic timeouts on SQLx 0.6.0, please open a new issue.
I'm using a connection pool with a large number of tokio tasks; see the example below. The tasks run a small query, do some work without holding on to a connection, then run another query. In the example, the second call to acquire quickly starts failing with PoolTimeout, or it takes too long, roughly connect_timeout seconds.
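The original example code wasn't captured in this thread, so below is a minimal sketch of the scenario described above, not the reporter's exact program. It assumes a sqlx 0.5/0.6-style API (`MySqlPoolOptions` with `connect_timeout`, which became `acquire_timeout` in 0.7), tokio, and a MySQL server reachable via the `DATABASE_URL` environment variable.

```rust
use std::time::Duration;

use sqlx::mysql::MySqlPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    let url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    let pool = MySqlPoolOptions::new()
        .max_connections(5)
        .connect_timeout(Duration::from_secs(10))
        .connect(&url)
        .await?;

    let mut handles = Vec::new();
    for i in 0..100 {
        let pool = pool.clone();
        handles.push(tokio::spawn(async move {
            // First acquire + short query, then release the connection.
            let mut conn = pool.acquire().await.expect("first acquire");
            sqlx::query("SELECT 1").execute(&mut *conn).await.unwrap();
            drop(conn);

            // Simulated work done without holding a connection.
            tokio::time::sleep(Duration::from_millis(50)).await;

            // Second acquire: this is the call reported to fail with a pool
            // timeout, or to take roughly connect_timeout seconds.
            let mut conn = pool.acquire().await.expect("second acquire");
            sqlx::query("SELECT 1").execute(&mut *conn).await.unwrap();
            println!("task {} done", i);
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
    Ok(())
}
```

With a small `max_connections` and enough concurrent tasks, the second `acquire()` in each task is where the reported timeout shows up.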
Any ideas would be appreciated.
Output snippets:
MySQL Engine version 8.0.17. Tested with stable-x86_64-unknown-linux-gnu and stable-x86_64-pc-windows-msvc.