Jetty HttpConnections not closed with SSE on Jetty 12.0.7+ when clients close the connection #32629
Thanks for the sample @vakiovale - the fact that only changing the Jetty version changes the behavior pointed to a bug in Jetty, or a bug in our Jetty support. I confirmed that:
This is a perfectly valid use case. There are some runtime differences, but we do support this use case.

Debugging

I've tried running the latest 12.0.9-SNAPSHOT version of Jetty, because a connection leak was recently fixed:

```xml
<properties>
    <java.version>17</java.version>
    <jetty.version>12.0.9-SNAPSHOT</jetty.version>
</properties>

<repositories>
    <repository>
        <id>jetty-snapshots</id>
        <url>https://oss.sonatype.org/content/repositories/jetty-snapshots/</url>
        <snapshots><enabled>true</enabled></snapshots>
    </repository>
</repositories>
```

When the connection is closed from the client side, we're seeing the following:
Nothing in that output looks suspicious to me. Later, connections are reclaimed when the idle timeouts are triggered.
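(As an aside, the idle timeout doing the reclaiming here is the connector's. In a Spring Boot app it could be tuned with a customizer along these lines; this is only a sketch, and the 30-second value is just an example, not a recommendation.)

```java
import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.ServerConnector;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class JettyTimeoutConfig {

    @Bean
    WebServerFactoryCustomizer<JettyServletWebServerFactory> jettyIdleTimeout() {
        return factory -> factory.addServerCustomizers(server -> {
            for (Connector connector : server.getConnectors()) {
                if (connector instanceof ServerConnector serverConnector) {
                    // Connections stuck after a client disconnect are only
                    // reclaimed once this idle timeout fires.
                    serverConnector.setIdleTimeout(30_000); // milliseconds
                }
            }
        });
    }
}
```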
In short, I don't have a clear understanding of the behavior difference right now. Maybe we should try to create a repro case that doesn't involve the Spring MVC stack.
I managed to track down the behavior difference between Jetty 12.0.3 and 12.0.4+ with Spring MVC and SSE events. When Spring MVC produces SSE events asynchronously, events are written to the Servlet response and flushed. With Jetty 12.0.3, the failed write would be caught and the Servlet channel aborted, releasing the connection. With Jetty 12.0.4+, the write still fails, but the connection is no longer released.

Unfortunately, I could not reproduce this reliably with a minimal Jetty sample, and I'm not familiar enough with Jetty's connection lifecycle. I'm sorry to ping you here @gregw, but do you have any insight or advice that could help me craft a reproducer for Jetty (see the sketch below)? Or is this a bug in Spring MVC in your opinion?

I did debug the repro app with Jetty 12.0.3, and the stacktrace was the following, up to the line that aborted the servlet channel.
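To make that ask concrete, a Jetty-only reproducer might look something like this sketch: a plain async servlet that periodically writes and flushes SSE-style data, assuming Jetty 12's ee10 APIs. The port and path mirror the sample app; everything else (class names, the one-second interval) is an assumption, not code from this issue.

```java
import jakarta.servlet.AsyncContext;
import jakarta.servlet.ServletOutputStream;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.eclipse.jetty.ee10.servlet.ServletContextHandler;
import org.eclipse.jetty.server.Server;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SseLeakRepro {

    public static void main(String[] args) throws Exception {
        Server server = new Server(9898);
        ServletContextHandler context = new ServletContextHandler("/");
        context.addServlet(SseServlet.class, "/bug/hunt");
        server.setHandler(context);
        server.start();
        server.join();
    }

    public static class SseServlet extends HttpServlet {

        private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response) {
            response.setContentType("text/event-stream");
            AsyncContext asyncContext = request.startAsync();
            asyncContext.setTimeout(0);
            // Write and flush a heartbeat every second, mimicking what Spring MVC
            // does for an SSE stream. After the client disconnects, the write
            // throws (an EofException on Jetty) but keeps being retried, which is
            // the situation where the HttpConnection appears to leak.
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    ServletOutputStream out = asyncContext.getResponse().getOutputStream();
                    out.print("data: heartbeat\n\n");
                    out.flush();
                } catch (Exception ex) {
                    // Deliberately do NOT call asyncContext.complete() here, to
                    // observe whether the container reclaims the connection on its own.
                }
            }, 0, 1, TimeUnit.SECONDS);
        }
    }
}
```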
Hey, thanks @bclozel for your help with this issue.
We also observed this same behavior.
This observation made me question whether I had made an error, as I did not notice any cleanup activity from the timeout handler. Below are examples of what I observed: the timeout handler loops continuously (every 30 seconds for each client cancellation), yet the `HttpConnection` objects are never cleared from memory.

Test runs

My test scenario for versions 12.0.3, 12.0.7, and 12.0.9-SNAPSHOT involved the following steps:
Test with Jetty 12.0.3

After cancelling the request:
Taking a heap dump with version 12.0.3, you will notice that the `HttpConnection` instances have been cleared away.

Test with Jetty 12.0.7
Checking the logs 1.5 minutes later, we can see that the timeout handler loops every 30 seconds:
A screenshot of the memory dump shows that the `HttpConnection` instances are still in memory.

Test with Jetty 12.0.9-SNAPSHOT

This looks pretty much the same as with 12.0.7, but I'll show the logs and screenshots here as well:
And the same problem occurs after timeouts as well.

Extra test with 12.0.9-SNAPSHOT: I canceled 12 requests consecutively, and the timeout loop never ends. Even over 30 minutes later, we still observe the same pattern in the logs and in the memory dump.
@vakiovale I think the timeout handler issue is completely separate and should be raised with the Jetty team directly.

I had a discussion with @rstoyanchev about this issue and we think we might commit a fix in Spring Framework for this. Also, we are wondering whether Jetty's behavior is consistent in this case: in a typical asynchronous Servlet write scenario, an error during the write is expected to be surfaced to the application. We are relying on the `AsyncListener#onError` callback to find out about such failures and complete the async request.
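For context, the kind of listener registration being relied on looks roughly like the sketch below, using the standard Jakarta Servlet API; the helper class and method are hypothetical, not Spring Framework's actual code.

```java
import java.io.IOException;

import jakarta.servlet.AsyncContext;
import jakarta.servlet.AsyncEvent;
import jakarta.servlet.AsyncListener;
import jakarta.servlet.http.HttpServletRequest;

final class AsyncSseSupport {

    // Start async processing and complete the request when the container
    // reports an error. Tomcat fires onError after a failed async write;
    // the open question in this thread is whether Jetty should too.
    static AsyncContext startSse(HttpServletRequest request) {
        AsyncContext asyncContext = request.startAsync();
        asyncContext.addListener(new AsyncListener() {
            @Override
            public void onError(AsyncEvent event) throws IOException {
                // Release the connection as soon as a failure is reported.
                event.getAsyncContext().complete();
            }
            @Override
            public void onTimeout(AsyncEvent event) throws IOException {
                event.getAsyncContext().complete();
            }
            @Override
            public void onComplete(AsyncEvent event) { }
            @Override
            public void onStartAsync(AsyncEvent event) { }
        });
        return asyncContext;
    }
}
```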
@bclozel Here is my analysis of this problem:
In this particular case, my understanding is that you've started an async context and returned from the Servlet's service method, so the container considers the request to still be in progress.

It looks to me like this is a bug in Spring: when you get an exception from the `ServletOutputStream` write, you should call `AsyncContext#complete()` so the container can release the connection.

WDYT?
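In code, that suggestion would be along these lines; a sketch only, not Spring Framework's actual implementation, and the class and method names are placeholders.

```java
import java.io.IOException;

import jakarta.servlet.AsyncContext;
import jakarta.servlet.ServletOutputStream;

final class SseWriter {

    // When an SSE write fails because the client is gone (Jetty's EofException),
    // completing the AsyncContext lets the container recycle the HttpConnection
    // instead of leaking it or waiting for an idle timeout.
    void sendEvent(AsyncContext asyncContext, String event) {
        try {
            ServletOutputStream out = asyncContext.getResponse().getOutputStream();
            out.print("data: " + event + "\n\n");
            out.flush();
        }
        catch (IOException ex) {
            asyncContext.complete();
        }
    }
}
```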
Thanks for the analysis @lorban. Tomcat in particular calls `AsyncListener#onError` when an asynchronous write fails, which also gives us a chance to complete the request. Is it intentional that Jetty does not?
Yes, it is intentional. Jetty doesn't call `AsyncListener#onError` in this case. Though, I'm happy to discuss this if you believe this is wrong.
Okay, good to know. I'll schedule the issue. Aside from that, it would be useful to know Jetty's reasons for not calling `AsyncListener#onError` in this case.
Description
Our Spring Boot 3.2.2 application experiences a memory leak leading to an `OutOfMemoryError`, with persistent `org.eclipse.jetty.server.internal.HttpConnection` objects found in the heap dump. This occurs when using Server-Sent Events (SSE) with `Flux` endpoints, and is particularly evident after an `EofException` is thrown by Jetty when the client disconnects but the server attempts to continue emitting events. The issue has been replicated in a minimal project that demonstrates the behavior under specific configurations and versions.

Link to the example project on GitHub
The example application uses Spring Boot 3.2.4 (and Jetty 12.0.7) and has a `Flux<ServerSentEvent>` endpoint emitting heartbeats for a duration of 10 seconds.
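Reconstructed from that description, the endpoint presumably looks something like this sketch; the class and method names are made up, while the path and the heartbeat behavior come from the issue.

```java
import java.time.Duration;

import org.springframework.http.MediaType;
import org.springframework.http.codec.ServerSentEvent;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class BugHuntController {

    // Emits one heartbeat per second for 10 seconds, then completes.
    @GetMapping(path = "/bug/hunt", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent<String>> hunt() {
        return Flux.interval(Duration.ofSeconds(1))
                .take(10)
                .map(i -> ServerSentEvent.builder("heartbeat " + i).build());
    }
}
```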
Steps to Reproduce

1. Use `curl` to connect to the SSE endpoint at `localhost:9898/bug/hunt`, then terminate the connection manually using Ctrl+C.
2. Observe that `HttpConnection` instances remain in memory, especially after an `EofException` occurs due to attempted heartbeat emissions.

Example
Doing this twice:

Taking a heap dump from the application (using IntelliJ IDEA's memory snapshot) and observing two instances of `HttpConnection`:

Expected Behavior
`HttpConnection` instances should be cleared from memory when the client disconnects, preventing any memory leaks.
Possible Solution

Removing `spring-boot-starter-web` or setting the application to `WebApplicationType.REACTIVE` also mitigates the issue.
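For reference, the `WebApplicationType.REACTIVE` mitigation would look something like this in the application's bootstrap class (a sketch; the application class name is hypothetical):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BugHuntApplication {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(BugHuntApplication.class);
        // Run on the reactive web stack instead of the Servlet stack,
        // sidestepping the Jetty HttpConnection leak described above.
        app.setWebApplicationType(WebApplicationType.REACTIVE);
        app.run(args);
    }
}
```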
Additional Context

We are unsure whether using `spring-boot-starter-web` together with reactive streams (like `Flux`) for SSE might be inherently problematic. We have used `spring-boot-starter-web` and `spring-boot-starter-webflux` simultaneously in our project, utilizing `Flux<ServerSentEvent>` endpoints without any issues. However, after upgrading our production environment from Spring Boot 3.0.5 to 3.2.2, we encountered this problem.

Thank you for your attention to this matter. I am more than happy to provide further details or additional information if needed.