Memory leak / unlimited caching kills the server #2372
> Since postgres 15.2 my application grows in terms of memory consumption (2.5 GB)
It's not a leak, but it is perhaps suboptimal behavior. It has nothing to do with prepared statement caching; it's simply because we push whole messages to the stream's write buffer, growing it if necessary, but once those messages are flushed the excess capacity is retained because it's likely to be needed again soon. Ensuring we always push whole messages to the buffer helps prevent issues with cancellation, as we should never send a truncated or garbled message that might confuse the server.

However, if you're just using the pool directly it's unlikely that it will use the same connection twice in a row, so you end up duplicating this excess allocation on potentially every query. That's likely why you're hitting OOM conditions.

As a temporary fix, you could detach and close the connection after a large query:

```rust
let mut conn = pool.acquire().await?;

sqlx::query!("INSERT INTO test_table VALUES ($1)", data)
    .execute(&mut *conn)
    .await?;

conn.detach().close().await?;
```

This may be error-prone if you're running with […]. You could also push smaller chunks with separate queries in a transaction. You could use […].

It's a shame we don't have an API for large objects, because I think that'd be the ideal thing to use here. You could use the server-side functions though.

As for a real solution, I'm not sure. We certainly could be smarter about how we manage the connection buffers, perhaps tracking the actual capacity we're using over time and shrinking it if it's not being used. That logic could probably live here, though we likely don't want to do that every flush as that could cause a lot of thrashing in the allocator. We could add a […].

The buffering itself could be smarter, such that we don't push whole messages to a single […].
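The "smaller chunks with separate queries in a transaction" suggestion could look roughly like the sketch below. This is a minimal illustration, assuming a hypothetical `chunks_table(id, chunk_no, chunk)` schema and using the runtime-checked `sqlx::query` instead of the `query!` macro:

```rust
// Hypothetical example: split a large payload into smaller INSERTs inside one
// transaction, so no single statement forces a huge write-buffer allocation.
// Table/column names and chunk size are assumptions, not from the original issue.
async fn insert_chunked(pool: &sqlx::PgPool, id: i64, data: &[u8]) -> Result<(), sqlx::Error> {
    const CHUNK_SIZE: usize = 1024 * 1024; // 1 MiB per statement (tunable)

    let mut tx = pool.begin().await?;

    for (chunk_no, chunk) in data.chunks(CHUNK_SIZE).enumerate() {
        sqlx::query("INSERT INTO chunks_table (id, chunk_no, chunk) VALUES ($1, $2, $3)")
            .bind(id)
            .bind(chunk_no as i32)
            .bind(chunk)
            .execute(&mut *tx)
            .await?;
    }

    tx.commit().await?;
    Ok(())
}
```

With smaller per-statement payloads, the write-buffer capacity retained by any single connection stays correspondingly small.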
@abonander Thanks for the detailed and interesting response!
Yes, that was my first workaround, but I didn't like having to change the app code.

I was "playing" with the code in the file that defines `WriteAndFlush`, and I've tested my app with a slightly modified version of that file:

```rust
// ...

impl<'a, S> Drop for WriteAndFlush<'a, S> {
    fn drop(&mut self) {
        let buf_vec = self.buf.get_mut();

        // clear the buffer regardless of whether the flush succeeded or not
        buf_vec.clear();
        buf_vec.shrink_to(512);
    }
}
```

So far it seems to work as expected 👍

If anybody wants to test it, the change is in my fork, in a branch created from the tag 0.6.2:

```toml
[patch.crates-io]
sqlx-core = { git = "https://github.com/MartinKavik/sqlx", branch = "fix/buf_limit_512_from_v062" }
```

I can imagine I/we could just add that […]
I'm adding a […]. The default buffer size is […].
I don't want to hardcode a shrink call to the buffer every time it's flushed, because that could cause a lot of unnecessary thrashing in the global allocator; it could especially cause fragmentation issues in an allocator like […].
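Later sqlx releases added an explicit, opt-in way to do this: `Connection::shrink_buffers()`. Below is a minimal sketch of how an application might call it after an unusually large query, assuming a sqlx version where that method is available; the table name and call site are illustrative, not from this thread:

```rust
use sqlx::{Connection, PgPool};

// Sketch: after a query with a very large bind parameter, explicitly shrink the
// connection's internal buffers before it goes back to the pool.
// Assumes `Connection::shrink_buffers()` is available (added after this discussion).
async fn insert_large(pool: &PgPool, data: &[u8]) -> Result<(), sqlx::Error> {
    let mut conn = pool.acquire().await?;

    sqlx::query("INSERT INTO test_table VALUES ($1)")
        .bind(data)
        .execute(&mut *conn)
        .await?;

    // Release the excess write-buffer capacity retained from the large message.
    conn.shrink_buffers();

    Ok(())
}
```

Because the call is explicit and synchronous, it can be limited to code paths that send unusually large payloads, avoiding the allocator thrashing that a shrink-on-every-flush would cause.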
Bug Description
The connection (pool) "remembers" the query with the biggest memory footprint. That means when you insert some binary data (e.g. 100 MB), the memory is not freed even though the data is already stored in the database.

I've tried to call `persistent(false)` and set various settings like `max_lifetime`, but nothing really helped.

I noticed the bug because my Rust app that uses sqlx through SeaORM was crashing on the server with an "out of memory" error. The app uploads temporary binary data (~1–40 MB) to Postgres, and it allocated hundreds of MBs because of the sqlx bug, until I set the maximum number of connections to a lower value with an aggressively short lifetime and `min_connections` set to 0.
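The pool-settings mitigation described above (few connections, short lifetime, `min_connections` of 0) could be expressed roughly like this, assuming `sqlx::postgres::PgPoolOptions`; the concrete numbers are illustrative, not the ones used in the app:

```rust
use std::time::Duration;
use sqlx::postgres::PgPoolOptions;

// Illustrative mitigation: keep few connections and recycle them aggressively,
// so a connection holding an oversized write buffer is dropped quickly.
// The concrete values are assumptions, not from the original report.
async fn make_pool(url: &str) -> Result<sqlx::PgPool, sqlx::Error> {
    PgPoolOptions::new()
        .max_connections(2)
        .min_connections(0)
        .max_lifetime(Duration::from_secs(30))
        .idle_timeout(Duration::from_secs(5))
        .connect(url)
        .await
}
```

This only bounds the damage: each live connection can still retain one oversized buffer, so it works around the behavior rather than fixing it.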
I've tried to debug it by myself but no luck, unfortunately.
Minimal Reproduction
https://github.com/MartinKavik/sqlx-memory-debug
(instructions + a screenshot in the README)
Info
`rustc --version`: rustc 1.67.1 (d5a82bbd2 2023-02-07)

Thank you!