[🐛 Bug]: Relaytype node has memory leak? #13643
Comments
@parholmdahl, thank you for creating this issue. We will troubleshoot it as soon as we can.

Info for maintainers: triage this issue by using labels.
- If information is missing, add a helpful comment and then the applicable label.
- If the issue is a question, add the applicable label.
- If the issue is valid but there is no time to troubleshoot it, consider adding the applicable label.
- If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable label.
- After troubleshooting the issue, please add the applicable label.

Thank you!
The memory usage goes up to 3+ GB on an idle node. Isn't that strange? Our "regular nodes" do not show this sawtooth pattern in memory and CPU usage, neither when they are used nor when they are idle.
@parhedberg I think the regular node had this before b22d08d. Some of these optimizations could be applied to the RelayOptions too, e.g. using the same

Regarding the 3+ GB while idle, I think a full GC happens when a certain percentage of memory consumption is reached.
@parhedberg I found an HttpClient which was not explicitly closed; this might be the cause of the slow release of memory.
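For readers following along: closing a client explicitly is typically done with Java's try-with-resources pattern. The sketch below is illustrative only — `FakeHttpClient` is a hypothetical stand-in, not Selenium's actual `HttpClient` class — but it shows how an unclosed per-request client accumulates open resources, and how try-with-resources fixes it.

```java
import java.io.Closeable;

// Hypothetical stand-in for a Closeable HTTP client; counts open instances
// so the leak is observable without real network resources.
class FakeHttpClient implements Closeable {
    static int openCount = 0;
    FakeHttpClient() { openCount++; }
    @Override public void close() { openCount--; }
}

public class RelayLeakSketch {
    // Leaky pattern: a client is created for each check but never closed.
    static void leakyStatusCheck() {
        FakeHttpClient client = new FakeHttpClient();
        // ... perform the status request with `client` ...
    }

    // Fixed pattern: try-with-resources guarantees close(), even on exceptions.
    static void fixedStatusCheck() {
        try (FakeHttpClient client = new FakeHttpClient()) {
            // ... perform the status request with `client` ...
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) leakyStatusCheck();
        System.out.println("open after leaky loop: " + FakeHttpClient.openCount);

        FakeHttpClient.openCount = 0;
        for (int i = 0; i < 100; i++) fixedStatusCheck();
        System.out.println("open after fixed loop: " + FakeHttpClient.openCount);
    }
}
```

With the leaky version, every poll of an idle relay node leaves another client (and its buffers/threads) reachable until a full GC or finalization eventually reclaims it — consistent with the slow sawtooth release described above.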
Redeployed our relay node with the nightly build now, so I will check the status in a day or two! (And yes, I see now that I used both my old and my new account in this thread, sorry for that.)
After 24 hours, memory and CPU usage look more natural for the relay node, like our ordinary nodes. Great work!
This issue has been automatically locked since there has not been any recent activity since it was closed. Please open a new issue for related bugs. |
What happened?
We are running a couple of nodes in Kubernetes. A couple of months ago we also set up a relay node to relay traffic to BrowserStack from our ordinary grid. After some time, when we started to examine what the pods were doing, we saw some strange behaviour in memory and CPU that we don't see in an "ordinary" type of node, only in this relay-type one.
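For context, a relay node forwarding to an external provider is configured with a `[relay]` section in the node's TOML config. The fragment below is a sketch of that kind of setup per the Selenium Grid docs; the URL, session count, and capabilities are placeholder values, not our actual configuration.

```toml
[node]
detect-drivers = false

[relay]
# Endpoint of the external provider the node relays to (placeholder URL)
url = "https://hub-cloud.browserstack.com/wd/hub"
status-endpoint = "/status"
# Max concurrent sessions and the capabilities the relay advertises (example values)
configs = [
  "5", '{"browserName": "chrome"}'
]
```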
This is a picture of CPU and memory utilisation in Docker:

This is on days when the relay node is not used at all. The graph starts when we restarted the relay node, and from that day we have not run ANY tests on that node.
How can we reproduce the issue?
Relevant log output
Operating System
Kubernetes v1.27.10+k3s2
Selenium version
4.17
What are the browser(s) and version(s) where you see this issue?
NA
What are the browser driver(s) and version(s) where you see this issue?
NA
Are you using Selenium Grid?
4.17