WebSocket server outgoing message queue memory growth #4824
Comments
Are you paying attention to the results of the async writes? It is up to the application not to overwhelm the async write path, as it is non-blocking and queues messages.
We are logging the result in WriteCallback but do not track a pending count. CPU was under 15% and network bandwidth was not saturated from our server's perspective. Is this queue for the entire server, or is it per session? If it is per session, I could track the outgoing count for a connection and kill it if it becomes too backed up.
It is per WebSocket Session. There is no exposed API for accessing the queue depth, either before or after the extension stack. Many projects using the async modes in Jetty WebSocket (or even the javax.websocket API or the newer jakarta.websocket API) have the same requirements.
Okay, the heap dump showed over 300k messages pending in that ExtensionStack, so I assume that was likely just a single slow client connection? We operate a "fire and forget" outgoing notification system, so tracking the pending count per session and dropping messages when the count gets too high seems a reasonable approach. We have similar logic at other levels of the stack.
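The tracking-and-drop approach described above can be sketched as follows. This is a hypothetical helper, not a Jetty API: the `WriteCallback` interface here is a minimal stand-in for Jetty's `org.eclipse.jetty.websocket.api.WriteCallback`, and the actual async send is passed in as a function so the sketch stays self-contained.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BiConsumer;

// Hypothetical sketch: bound the number of in-flight async sends per session
// and drop new messages once the limit is reached, instead of letting the
// outgoing queue grow without bound.
public class BoundedSender {
    // Minimal stand-in for Jetty's org.eclipse.jetty.websocket.api.WriteCallback.
    interface WriteCallback {
        void writeSuccess();
        void writeFailed(Throwable x);
    }

    private final AtomicInteger pending = new AtomicInteger();
    private final int maxPending;

    public BoundedSender(int maxPending) {
        this.maxPending = maxPending;
    }

    /** Returns true if the message was handed to the async sender, false if dropped. */
    public boolean trySend(String text, BiConsumer<String, WriteCallback> asyncSend) {
        // Reserve a slot; back off if this session is already too far behind.
        if (pending.incrementAndGet() > maxPending) {
            pending.decrementAndGet();
            return false; // fire-and-forget: drop rather than queue without bound
        }
        asyncSend.accept(text, new WriteCallback() {
            @Override public void writeSuccess() { pending.decrementAndGet(); }
            @Override public void writeFailed(Throwable x) { pending.decrementAndGet(); }
        });
        return true;
    }

    public int pendingCount() { return pending.get(); }
}
```

In a real session handler, `asyncSend` would wrap `session.getRemote().sendString(text, callback)`, and a session that keeps hitting the limit could also be closed instead of merely having messages dropped.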
300k is a lot, and it could be from a single slow client if you don't pay attention to the async results. Have you considered a more robust library for notifying clients, like CometD (with WebSocket support)? It manages the write queue for you. @gregw @sbordet, what do you think about putting a (session-configurable) upper limit on the WriteFlusher queue in WebSocket?
We have many third-party clients that use our platform, so even if we added support for a new standard on the server side, we would have to support both implementations for quite a while. We eventually hope to move to a managed MQTT solution, but that is likely a ways off as well. I found an issue stating that a WebSocket async send timeout was implemented in Jetty 10 but not 9.4: #847. Something like that would be quite useful here; could it be brought back to 9.4? Also, is the OP of that ticket correct when he claims that the WriteCallback is called before the message is actually sent to the client? If so, monitoring those callbacks won't give a good indication of the actual queue length.
This is my current problem as well. In all the examples I found (including CometD), the async call is immediately followed by future.get() (how is that better than blocking?), or the future/callback is ignored altogether, letting the internal queue grow without bound.
Every time you use the send methods, that async send timeout resets.
The success callback is invoked once the message has been successfully written to the network. After that, all of the normal TCP behaviors apply. This is not unique to WebSocket; it is common, typical behavior for all TCP-based communications.
TCP ACK is layer 4. |
Got it, so Java's TCP layer will eventually apply backpressure when its own queue fills up, which will delay Jetty's WriteCallback as well. That's what I hoped but wanted to confirm. I should be able to try this out tomorrow. Thanks for all your help.
Thanks for such a great response @joakime
Here's a test case showing the WebSocket backpressure; the output shows how the backpressure behaves.
@joakime I'm not really sure it is correct to fail frames without failing the connection. What if I send three frames and it drops the middle one? Then the other side sees only the first and last frame. But I think this would be pretty simple to implement. I had a go at implementing this in
This reverts commit a21e833.
Issue #4824 - add configuration on RemoteEndpoint for maxOutgoingFrames
We have added configuration on RemoteEndpoint for maxOutgoingFrames, which bounds how many outgoing frames may be queued per Session. This can be set on the RemoteEndpoint.
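A simplified model of what a per-session frame limit does, as an illustration only (this is not Jetty's actual implementation, and the class and method names here are hypothetical): each queued frame takes a slot, a send beyond the limit fails its callback immediately instead of being queued, and a slot is released when a frame is flushed.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of a per-session maxOutgoingFrames limit. In the real
// stack the slot would be released by the flusher when the frame is written.
public class FrameLimiter {
    // Minimal stand-in for a frame write callback.
    interface Callback {
        void succeeded();
        void failed(Throwable x);
    }

    private final AtomicInteger queued = new AtomicInteger();
    private final int maxOutgoingFrames;

    public FrameLimiter(int maxOutgoingFrames) {
        this.maxOutgoingFrames = maxOutgoingFrames;
    }

    /** Queue a frame for writing, or fail the callback if the limit is hit. */
    public void sendFrame(Runnable write, Callback cb) {
        if (queued.incrementAndGet() > maxOutgoingFrames) {
            queued.decrementAndGet();
            // Fail fast rather than let the queue grow without bound.
            cb.failed(new IllegalStateException("too many frames queued"));
            return;
        }
        write.run(); // hand the frame to the (async) flusher
    }

    /** Called when a queued frame has been flushed to the network. */
    public void frameFlushed(Callback cb) {
        queued.decrementAndGet();
        cb.succeeded();
    }
}
```

The key design point this models is that the failure is delivered through the normal write callback, so an application that already checks its callbacks sees the overload immediately.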
Jetty version
9.4.27.v20200227
Java version
OpenJDK 11.0.3
OS type/version
Amazon Linux 2
Description
We have observed excessive JVM heap growth twice in the last couple of days, and a heap dump shows the majority of the growth is due to a backup of outgoing Jetty WebSocket FrameEntries stored in a single org.eclipse.jetty.websocket.common.extensions.ExtensionStack (see the attached heap dump screenshot). We are calling the
RemoteEndpoint.sendString(String text, WriteCallback callback)
method to send messages asynchronously to connected WebSocket clients. Are there any known scenarios that could cause this to happen, and is there anything we can do to prevent it?