'acquired' channels not decremented in some timeout scenarios #9448
When there was a read timeout (i.e. timeout handled by DefaultHttpClient) while the Mono<PoolHandle> from ConnectionManager was not yet complete, the PoolHandle would be dropped silently. This patch handles cancellation of the Mono properly, releasing the pool handle. This is already fixed by the ConnectionManager rework in 4.0. Fixes #9448
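The mechanism of the fix, as described, is that cancelling the acquisition Mono must release the pool handle rather than drop it. Below is a minimal Reactor sketch of that pattern, written against Netty's ChannelPool directly; it illustrates the idea under assumed types and is not the actual Micronaut patch:

```java
import io.netty.channel.Channel;
import io.netty.channel.pool.ChannelPool;
import reactor.core.publisher.Mono;

final class PoolAcquire {

    // Sketch: adapt ChannelPool.acquire() to a Mono so that a subscriber
    // cancelling (e.g. because a read timeout fired first) cannot strand
    // the channel in the 'acquired' state.
    static Mono<Channel> acquire(ChannelPool pool) {
        return Mono.<Channel>create(sink ->
            pool.acquire().addListener(f -> {
                if (f.isSuccess()) {
                    // If the sink was already cancelled, this value is
                    // routed to the discard hook below instead.
                    sink.success((Channel) f.getNow());
                } else {
                    sink.error(f.cause());
                }
            })
        )
        // A channel delivered after cancellation would otherwise be
        // dropped silently; releasing it here decrements the pool's
        // acquired count, which is exactly what the bug was missing.
        .doOnDiscard(Channel.class, pool::release);
    }
}
```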
Thanks for the report, I've made a PR to fix it. Please don't block the event loop like in your example though, Netty does not like it :)
Awesome! Thanks! Haha, yes, blocking purely to create the conditions to reproduce the problem.
That's perfect, I just wanted to report this problem.
@yawkat I am also wondering when this will be in a release. We are currently holding off upgrading until this fix is available. |
I'm not sure; 4.0 will probably be released before then, and 4.0 never had this issue.
Expected Behavior
The acquired count should reflect the number of channels actually in use in all cases.
Actual Behaviour
In scenarios where the client has a fixed channel pool and the pool is backed up, the 'acquired' channel tracking can 'leak': the count is incremented but never comes back down, causing all subsequent requests to time out waiting for a channel.
Steps To Reproduce
See the example application linked below. It reproduces a scenario in which the 'acquired' channel count maintained by the client's FixedChannelPool is never decremented when the Netty client threads are unresponsive (this seems to correlate with io.micronaut.http.client.exceptions.ReadTimeoutException being thrown).
The MicronautAcquireLeakTest in that project reproduces the issue; the general shape of the test is sketched below.
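In outline: a single-connection pool is saturated by a slow request, and a second request's read timeout then fires while it is still waiting for a pool handle. A hedged sketch of that shape follows; the configuration keys are standard Micronaut 3.x client settings, while the endpoint and class names are illustrative stand-ins for the linked repository's actual test (which additionally blocks the event loop to keep the acquisition pending):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.micronaut.context.annotation.Property;
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.http.client.exceptions.ReadTimeoutException;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import jakarta.inject.Inject;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Illustrative shape only; see the linked repository for the actual
// MicronautAcquireLeakTest.
@MicronautTest
@Property(name = "micronaut.http.client.pool.enabled", value = "true")
@Property(name = "micronaut.http.client.pool.max-connections", value = "1")
@Property(name = "micronaut.http.client.read-timeout", value = "500ms")
class AcquireLeakTest {

    @Inject
    @Client("/")
    HttpClient client;

    @Test
    void timedOutAcquisitionLeaksAcquiredCount() throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);

        // Saturate the single-connection pool with a request to a
        // hypothetical endpoint that responds slower than the read timeout.
        executor.submit(() -> client.toBlocking().exchange("/slow"));
        Thread.sleep(100); // let the first request claim the channel

        // This request times out while its acquisition is still pending;
        // on affected versions the handle delivered after the timeout is
        // dropped, so the pool's 'acquired' count never decrements.
        assertThrows(ReadTimeoutException.class,
                () -> client.toBlocking().exchange("/slow"));

        // Symptom: from here on, every request times out waiting for a
        // channel, even once the slow response has completed.
        executor.shutdown();
    }
}
```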
Environment Information
macOS 13.4 (Apple M1)
Micronaut version 3.9.3
openjdk 11.0.18 2023-01-17 LTS
OpenJDK Runtime Environment Zulu11.62+17-CA (build 11.0.18+10-LTS)
OpenJDK 64-Bit Server VM Zulu11.62+17-CA (build 11.0.18+10-LTS, mixed mode)
Example Application
https://github.com/dhofftgt/micronaut-acquire-leak
Version
3.9.3