java-buildpack should provide the configuration to minimize memory fragmentation and JVM process RSS memory usage #163
Comments
Considering the comment made here (#159 (comment)), the changes coming to support configured initial memory sizes (#200), the resolution for the Tomcat HTTP connectionTimeout (#158), and that we still intend to add support for specifying the maximum number of threads (#157), could you update your requirements/concerns in this issue? Would the changes and issues elsewhere cover this issue now? I'd just like an update on your thoughts here before deciding how to proceed.
Considering that the issues related to the 4 points raised in the original post have now been resolved/implemented, I'm going to close this issue off.
The default behavior is 8x the number of detected CPUs. As Cloud Foundry typically uses large host machines with smaller containers, and the Java process is unaware of the difference in allocated CPUs, the numbers are way off. This often leads to high native memory usage, followed by a cgroup OOM killer event. We go with Heroku's recommendation of lowering the setting to 2 for small instances. We also grow the setting linearly with memory to be more in line with the default setting in Mendix Cloud v3. References:
- cloudfoundry/java-buildpack#163
- https://devcenter.heroku.com/articles/testing-cedar-14-memory-use
- cloudfoundry/java-buildpack#320
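The setting in question is glibc's `MALLOC_ARENA_MAX` (named later in this thread). A minimal sketch of the "2 for small instances, grow linearly with memory" policy described above; the 1024 MB threshold and the 512 MB-per-extra-arena step are illustrative assumptions, not the actual formula used by any buildpack:

```shell
#!/bin/sh
# Sketch: pick a malloc arena count from the container memory size (MB).
# Small containers get Heroku's recommended value of 2; larger ones grow
# linearly. Threshold and step size are assumed values for illustration.
arena_max() {
  mem_mb=$1
  if [ "$mem_mb" -le 1024 ]; then
    echo 2
  else
    echo $(( 2 + (mem_mb - 1024) / 512 ))
  fi
}

arena_max 512    # small container -> 2
arena_max 2048   # larger container -> 4
```

The resulting value would then be exported as `MALLOC_ARENA_MAX` before the JVM starts.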
There is a general problem with JVM resident memory usage. The memory usage grows slowly because of allocated-memory fragmentation, even when the application has no heap memory leak and there is no native memory leak in the application. In most applications this memory usage creep is small, but for some applications it can be a major problem.
In Cloud Foundry, the process gets killed by the OOM killer when it hits its resource limit.
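A quick way to watch for this creep is to track the process's resident set size over time. A hedged sketch for Linux, reading the standard `VmRSS` field from `/proc` (roughly what the cgroup OOM killer accounts against the limit, though the exact accounting also includes other memory charged to the cgroup):

```shell
#!/bin/sh
# Print the resident set size of the current process, in kB.
# Sampling this periodically shows whether RSS is creeping toward
# the container's memory limit.
rss_kb=$(awk '/^VmRSS:/ {print $2}' /proc/self/status)
echo "RSS: ${rss_kb} kB"
```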
The memory fragmentation of the "native" (non-heap) memory allocated by the JVM can be minimized with configuration.
java-buildpack should provide the configuration to minimize memory fragmentation. I have previously created these issues/pull requests that are related to this problem:
I have been running the application in production without making changes to the application. After using a buildpack with the #159 changes, the RSS of the process doesn't grow as fast as without that change. That change alone didn't prevent the process from going over the container memory limit, but the uptime was longer.
After applying #160, the `MALLOC_ARENA_MAX` change, I didn't notice a significant difference in uptime compared to the plain #159 change. This led me to search for more sources of JVM native memory fragmentation. The uptime is now the longest I've measured; at the time of writing I've got 32 hours of uptime in the production app after adding some of these parameters:
It looks like these settings do reduce memory fragmentation overhead. The default for `CodeCacheExpansionSize` is 64k. That might be causing fragmentation. It might cause differences in how the HotSpot compiler behaves if `InitialCodeCacheSize` is set to a high value, so there might be tradeoffs involved. I also added a limit for `MaxDirectMemorySize` and `CompressedClassSpaceSize`, however I think these don't help in reducing memory fragmentation.
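For illustration, a flag set combining the settings named above might look as follows. The specific values are assumptions for the sake of example, not the values from the original report:

```shell
# Illustrative values only (assumptions) -- tune for the container size.
# Limiting malloc arenas and pre-sizing the code cache reduce native-memory
# fragmentation; the last two flags cap direct buffers and class metadata.
export MALLOC_ARENA_MAX=2
export JAVA_OPTS=" \
 -XX:ReservedCodeCacheSize=240m \
 -XX:InitialCodeCacheSize=240m \
 -XX:CodeCacheExpansionSize=32m \
 -XX:MaxDirectMemorySize=128m \
 -XX:CompressedClassSpaceSize=256m"
```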