Polish for fixes from #223 and #224 #226
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #226      +/-   ##
==========================================
+ Coverage   80.03%   81.50%   +1.47%
==========================================
  Files           6        6
  Lines         621      665      +44
==========================================
+ Hits          497      542      +45
+ Misses        124      123       -1
```

☔ View full report in Codecov by Sentry.
Force-pushed from 5cc28aa to 33b0bf0.
The reaper only runs against the connections in its idle pool. This is fine for reaping idle connections, but it can prove problematic for hotly contested connections that are past their maximum lifetime. Consider an active connection beyond its lifetime and a reaper that runs every 3 seconds:

- [t0] Connection is idle
- [t1] Connection is active
- [t2] Reaper runs, does not see connection
- [t3] Connection is idle

This pattern can repeat indefinitely, with the connection never being reaped. By checking the max lifetime on drop, we can ensure that expired connections are reaped in a reasonable amount of time (assuming they eventually do get dropped).
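As a rough illustration of the drop-side check described above, here is a minimal sketch; the type and method names (`PoolInner`, `on_drop`, `max_lifetime`) are hypothetical stand-ins, not the crate's actual API:

```rust
use std::time::{Duration, Instant};

// Minimal stand-ins for the pool's connection bookkeeping; field and type
// names here are illustrative, not the crate's actual definitions.
struct Conn {
    birth: Instant,
}

impl Conn {
    // A connection is expired once it has been alive for at least `max`.
    fn is_expired(&self, now: Instant, max: Duration) -> bool {
        now - self.birth >= max
    }
}

struct PoolInner {
    max_lifetime: Option<Duration>,
}

impl PoolInner {
    // Hypothetical hook called when a checked-out connection is dropped.
    // Instead of waiting for the reaper to catch the connection while it
    // sits idle, check the lifetime here and discard it immediately.
    fn on_drop(&self, conn: Conn) {
        if let Some(max) = self.max_lifetime {
            if conn.is_expired(Instant::now(), max) {
                // Expired: close it and (optionally) request a replacement
                // rather than returning it to the idle pool.
                return;
            }
        }
        // Not expired: return `conn` to the idle pool (omitted here).
        let _ = conn;
    }
}

fn main() {
    let pool = PoolInner { max_lifetime: Some(Duration::from_secs(30)) };
    let conn = Conn { birth: Instant::now() };
    pool.on_drop(conn); // fresh connection: would go back to the idle pool
}
```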
It's possible to trigger more approvals than necessary, in turn grabbing more connections than we need. This happens when we drop a connection: the drop produces a notify, which doesn't get used until the pool is empty. The first `Pool::get()` call on an empty pool will spawn a connect task, immediately complete `notify.notified().await`, then spawn a second connect task. Both will connect, and we'll end up with one more connection than we need. Rather than address the notify issue directly, this fix introduces some bookkeeping that tracks the number of open `pool.get()` requests waiting on connections. If the number of pending connections >= the number of pending gets, we do not spawn any additional connect tasks.
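A minimal sketch of that bookkeeping, assuming hypothetical names (`Counters`, `pending_gets`, `pending_conns`) and a simplified atomic-counter design rather than the pool's actual internals:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative counters for in-flight work; names and synchronization are
// assumptions, not the pool's real implementation.
struct Counters {
    // `Pool::get()` callers currently waiting for a connection.
    pending_gets: AtomicUsize,
    // Connect tasks spawned but not yet resolved.
    pending_conns: AtomicUsize,
}

impl Counters {
    // Spawn another connect task only if there are more waiters than
    // connects already in flight; otherwise the notify from a dropped
    // connection plus the extra task would overshoot by one connection.
    fn should_spawn_connect(&self) -> bool {
        let gets = self.pending_gets.load(Ordering::SeqCst);
        let conns = self.pending_conns.load(Ordering::SeqCst);
        conns < gets
    }
}

fn main() {
    let counters = Counters {
        pending_gets: AtomicUsize::new(1),
        pending_conns: AtomicUsize::new(1),
    };
    // One getter, one connect already in flight: no second task is spawned.
    assert!(!counters.should_spawn_connect());
}
```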
```diff
@@ -252,6 +252,11 @@ impl<C: Send> Conn<C> {
             birth: Instant::now(),
         }
     }
+
+    pub(crate) fn is_expired(&self, now: Instant, max: Duration) -> bool {
```
Interestingly, your earlier formulation of this failed on Windows due to an overflow. You changed `now - conn.conn.birth >= lifetime` to (effectively) `self.birth <= (now - lifetime)`, but this failed CI on Windows with `overflow when subtracting duration from instant` on line 155 (which did the `now - lifetime` subtraction). Changed this to still extract the method, but take two arguments so we can keep the earlier formulation.
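For illustration, a standalone sketch (not the crate's actual code) contrasting the two formulations; the function names and bodies are assumptions based on the discussion above:

```rust
use std::time::{Duration, Instant};

// Earlier formulation: subtract two `Instant`s (well defined as long as
// `now >= birth`) and compare the elapsed time against the max lifetime.
fn is_expired(birth: Instant, now: Instant, max: Duration) -> bool {
    now - birth >= max
}

// Rejected formulation: `now - max` subtracts a `Duration` from an
// `Instant`, which panics ("overflow when subtracting duration from
// instant") when `max` is larger than the span the platform's monotonic
// clock can represent before `now`; this is what tripped CI on Windows.
fn is_expired_overflowing(birth: Instant, now: Instant, max: Duration) -> bool {
    birth <= now - max
}

fn main() {
    let birth = Instant::now();
    let now = Instant::now();
    let max = Duration::from_secs(30);
    assert!(!is_expired(birth, now, max));
    // `is_expired_overflowing(birth, now, Duration::MAX)` would panic with
    // the overflow message quoted above; we only reference it here.
    let _ = is_expired_overflowing;
}
```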
(Don't know if AWS does any sponsoring of OSS projects, but would sure be nice to get some compensation for ongoing maintenance of this crate.)
Thanks for getting this over the finish line!
@tneely thanks for your efforts so far! I have some nits to pick with your PRs but figured it might be more expedient (especially given the ~24-hour RTT) if I just took care of them myself. Will retain your authorship and make minor changes.