Improve the throughput of SocketsHttpHandler's HTTP/1.1 connection pool #99364
Conversation
Tagging subscribers to this area: @dotnet/ncl
/azp run runtime-libraries-coreclr outerloop
Azure Pipelines successfully started running 1 pipeline(s).
/azp run runtime-libraries stress-http
Azure Pipelines successfully started running 1 pipeline(s).
Using a stack here is close enough: in the benchmarks the collection is close to empty almost all of the time, so contention is similar between the stack and the queue. I'll switch the PR to use that to avoid the behavioral change. It does mean an extra 32-byte allocation for each enqueue op, sadly (+1 for #31911).
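For context on that allocation (an illustrative sketch, not the PR's code): ConcurrentStack&lt;T&gt; is a lock-free linked list, so every Push allocates a node object, while ConcurrentQueue&lt;T&gt; writes into array-backed segments and is allocation-free in steady state. Something like the following, assuming a modern .NET runtime with GC.GetAllocatedBytesForCurrentThread, makes the per-Push cost visible:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch: roughly measure the per-Push allocation of ConcurrentStack<T>.
// Numbers are platform-dependent; ~32 bytes per node on 64-bit.
class PushAllocationCheck
{
    static void Main()
    {
        var stack = new ConcurrentStack<object>();
        object item = new object();

        long before = GC.GetAllocatedBytesForCurrentThread();
        for (int i = 0; i < 1_000; i++)
        {
            stack.Push(item);     // each Push allocates a linked-list node
            stack.TryPop(out _);  // TryPop just unlinks; the node becomes garbage
        }
        long after = GC.GetAllocatedBytesForCurrentThread();

        Console.WriteLine($"Bytes per Push/TryPop pair: {(after - before) / 1_000.0:F1}");
    }
}
```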
Force-pushed from 9c27a09 to 5db7ecd (compare)
/azp run runtime-libraries-coreclr outerloop
Azure Pipelines successfully started running 1 pipeline(s).
/azp run runtime-libraries stress-http
Azure Pipelines successfully started running 1 pipeline(s).
Closes #70098
The connection pool currently manages the list of available connections and the requests queue under a single lock.
As the number of cores and RPS rise, the speed at which the pool can manage connections becomes a bottleneck.
This PR brings the fast path (there are enough connections available to process all requests) down to a ConcurrentStack.Push + ConcurrentStack.TryPop.
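As a rough illustration of that fast-path shape (hypothetical types and method names, not the actual HttpConnectionPool code): renting a connection is a TryPop and returning one is a Push, and the lock-protected logic is only reached when no idle connection is available.

```csharp
using System.Collections.Concurrent;

// Hypothetical sketch of the fast path; PoolConnection and RentSlow are
// stand-ins, not the real SocketsHttpHandler types.
sealed class Http11PoolSketch
{
    private readonly ConcurrentStack<PoolConnection> _idleConnections = new();

    public PoolConnection Rent()
    {
        // Fast path: an idle connection is available, no lock taken.
        if (_idleConnections.TryPop(out PoolConnection connection))
            return connection;

        // Slow path: queue the request and/or inject a new connection,
        // which is where the pool's lock still comes into play.
        return RentSlow();
    }

    public void Return(PoolConnection connection) =>
        _idleConnections.Push(connection); // Fast path for returns as well.

    private PoolConnection RentSlow() => new PoolConnection();
}

sealed class PoolConnection { }
```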
Numbers for ConcurrentQueue, from #70098 (comment):
This shows that before this PR, manually splitting load between multiple HttpClient instances can have a significant impact.
After the change, there's no more benefit to doing that as a single pool can efficiently handle the higher load.
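For reference, the manual workaround alluded to above looked roughly like the following sketch (the sharding scheme and names are illustrative, not taken from YARP or the benchmarks): each SocketsHttpHandler owns its own connection pool, so round-robining requests across several clients also splits the pool contention.

```csharp
using System.Net.Http;
using System.Threading;

// Sketch of the "manually split load across multiple HttpClient instances"
// workaround that this PR makes unnecessary. Each SocketsHttpHandler owns its
// own connection pool, so round-robining across N clients splits contention.
sealed class ShardedHttpClient
{
    private readonly HttpClient[] _clients;
    private int _next;

    public ShardedHttpClient(int shards)
    {
        _clients = new HttpClient[shards];
        for (int i = 0; i < shards; i++)
            _clients[i] = new HttpClient(new SocketsHttpHandler());
    }

    public HttpClient Next() =>
        _clients[(uint)Interlocked.Increment(ref _next) % (uint)_clients.Length];
}
```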
YARP's http-http 100 byte scenario:
In-memory loopback benchmark that stresses the connection pool contention: https://gist.github.com/MihaZupan/27f01d78c71da7b9024b321e743e3d88
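The linked gist is the authoritative benchmark; its rough shape is a fixed number of worker tasks hammering a single shared HttpClient and reporting requests per second, presumably with an in-memory loopback transport plugged in via SocketsHttpHandler.ConnectCallback so no real network I/O is measured. A simplified sketch (placeholder endpoint, no in-memory transport wired up):

```csharp
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Rough shape of a pool-contention benchmark (not the linked gist): N worker
// tasks share one HttpClient and the harness reports requests/second.
class PoolContentionBenchmark
{
    static async Task Main()
    {
        using var client = new HttpClient(new SocketsHttpHandler());
        var uri = new Uri("http://localhost:5000/");   // placeholder endpoint
        int completed = 0;

        var sw = Stopwatch.StartNew();
        Task[] workers = new Task[Environment.ProcessorCount]; // the gist varies 1-6
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = Task.Run(async () =>
            {
                while (sw.Elapsed < TimeSpan.FromSeconds(5))
                {
                    using HttpResponseMessage response = await client.GetAsync(uri);
                    Interlocked.Increment(ref completed);
                }
            });
        }
        await Task.WhenAll(workers);

        Console.WriteLine($"RPS: {completed / sw.Elapsed.TotalSeconds:N0}");
    }
}
```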
Rough RPS numbers with 1-6 threads:
Breaking change consideration (no longer relevant after switching to ConcurrentStack)
While I was careful to keep the observable behavior of the pool as close as possible to what we have today, there is one important change I made intentionally: the order in which we dequeue idle connections changes from LIFO to FIFO (from a stack to a queue), because the backing store for available connections is now a ConcurrentQueue.
Where this distinction may matter is when load drops for a longer period such that we no longer need as many connections. Previously we would keep the surplus connections completely idle and eventually remove them via the idle timeout. With this change, we would keep cycling through all connections, potentially keeping more of them alive. A slight benefit of that behavior is that it makes the idle-close race condition less likely (the server closing an idle connection just after we've started using it again).
See #99364 (comment) for ConcurrentStack results (current PR).
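To make the LIFO/FIFO distinction concrete, here is a small standalone demo (toy code, not pool code): with a stack, repeated rent/return keeps hitting the most recently returned connection while the rest stay idle long enough to be reclaimed by the idle timeout; with a queue, the takes rotate through every idle connection.

```csharp
using System;
using System.Collections.Concurrent;

// Toy demonstration of LIFO vs FIFO reuse. Five idle "connections" exist but
// only one is needed at a time.
class LifoVsFifoDemo
{
    static void Main()
    {
        var stack = new ConcurrentStack<int>();
        var queue = new ConcurrentQueue<int>();
        for (int id = 1; id <= 5; id++) { stack.Push(id); queue.Enqueue(id); }

        Console.Write("LIFO reuse: ");
        for (int i = 0; i < 5; i++)
        {
            stack.TryPop(out int c);     // always takes the most recently returned
            Console.Write(c + " ");      // prints: 5 5 5 5 5
            stack.Push(c);
        }

        Console.Write("\nFIFO reuse: ");
        for (int i = 0; i < 5; i++)
        {
            queue.TryDequeue(out int c); // cycles through every idle connection
            Console.Write(c + " ");      // prints: 1 2 3 4 5
            queue.Enqueue(c);
        }
        Console.WriteLine();
    }
}
```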