
don't set -XX:MetaspaceSize= setting since it can prevent cleanup of allocated native memory #159

Closed
wants to merge 1 commit

Conversation

@lhotari commented Mar 6, 2015

No description provided.

@cfdreddbot commented
Hey lhotari!

Thanks for submitting this pull request!

All pull request authors must have a Contributor License Agreement (CLA) on-file with us. Please sign the appropriate CLA (individual or corporate).

When sending the signed CLA, please provide your GitHub username (for an individual CLA) or the list of GitHub usernames that can make pull requests on behalf of your organization (for a corporate CLA).

@nebhale (Member) commented Mar 9, 2015

@lhotari At the very least you'll need to squash your two commits into a single one. Beyond that, do you have any evidence that this change (which would be different from every other memory setting) actually changes the cleanup behavior of the metaspace? This design exists to pre-allocate as much memory as we can, which shouldn't make any difference to an application. If the application is going to go OOM, it will whether this setting is specified or not.

@lhotari (Author) commented Mar 9, 2015

@nebhale The Rubocop fix wasn't related to the change I made. That's why I didn't squash it. I'll rebase the PR and see if the Rubocop error has been fixed elsewhere.

@lhotari (Author) commented Mar 9, 2015

@nebhale I assume that setting MetaspaceSize=MaxMetaspaceSize will prevent GC for the metaspace. It doesn't change the pre-allocation behaviour of the metaspace in the Java 8 JVM.

This is documented in the Oracle documentation about Class Metadata, and that led me to these conclusions.

Class metadata is deallocated when the corresponding Java class is unloaded. Java classes are unloaded as a result of garbage collection, and garbage collections may be induced in order to unload classes and deallocate class metadata. When the space committed for class metadata reaches a certain level (a high-water mark), a garbage collection is induced. After the garbage collection, the high-water mark may be raised or lowered depending on the amount of space freed from class metadata. The high-water mark would be raised so as not to induce another garbage collection too soon. The high-water mark is initially set to the value of the command-line option `MetaspaceSize`. It is raised or lowered based on the options `MaxMetaspaceFreeRatio` and `MinMetaspaceFreeRatio`. If the committed space available for class metadata as a percentage of the total committed space for class metadata is greater than `MaxMetaspaceFreeRatio`, then the high-water mark will be lowered. If it is less than `MinMetaspaceFreeRatio`, then the high-water mark will be raised.
Specify a higher value for the option `MetaspaceSize` to avoid early garbage collections induced for class metadata. The amount of class metadata allocated for an application is application-dependent and general guidelines do not exist for the selection of `MetaspaceSize`. The default size of `MetaspaceSize` is platform-dependent and ranges from 12 MB to about 20 MB.
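For illustration, the high-water-mark behaviour described above can be watched from inside the JVM. This is a minimal sketch (not part of this PR), assuming a HotSpot Java 8 JVM where class metadata is exposed as a memory pool named "Metaspace"; the committed figure is the native memory backing the metaspace, and it should move as the high-water mark is raised or lowered between MetaspaceSize and MaxMetaspaceSize:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MetaspaceWatcher {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if ("Metaspace".equals(pool.getName())) {
                    MemoryUsage usage = pool.getUsage();
                    // "committed" is the metaspace backed by native memory; watching it over time
                    // shows whether class metadata GCs ever shrink it.
                    System.out.printf("Metaspace used=%d KB committed=%d KB%n",
                            usage.getUsed() / 1024, usage.getCommitted() / 1024);
                }
            }
            Thread.sleep(5_000);
        }
    }
}
```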

@lhotari (Author) commented Mar 9, 2015

@nebhale Now it's a single commit after rebasing it on the latest master.

@nebhale (Member) commented Mar 9, 2015

Why are you keen on getting a Metaspace GC earlier than you'd absolutely have to? As long as the container has enough memory to handle MaxMetaspaceSize, it shouldn't matter if the GC is postponed (possibly forever).

@lhotari (Author) commented Mar 9, 2015

@nebhale I believe that Metaspace GC does do some cleanup that reduces the RSS allocation of the Java process. I think that it does matter that Metaspace GC is postponed. I'm currently testing this hypothesis with the webapp I'm maintaining and running in Pivotal WS.

Since setting MetaspaceSize doesn't really do any pre-allocation, it shouldn't be set by default.
I haven't found any docs that recommend setting MetaspaceSize=MaxMetaspaceSize.

@nebhale (Member) commented Mar 9, 2015

Let's assume for a moment that MetaspaceSize is not set. In that case, you could still end up with an OOM based on the Xss contribution. Not setting it hasn't really solved any problem. The solution to the problem is to better account for the Xss contribution.

@lhotari (Author) commented Mar 9, 2015

@nebhale From my understanding metaspace isn't allocated from the heap at all.

In JDK 8, the permanent generation was removed and the class metadata is allocated in native memory. (source: HotSpot Virtual Machine Garbage Collection Tuning Guide, Class Metadata)

Why would better accounting for the Xss contribution help?

@nebhale (Member) commented Mar 9, 2015

To be more precise, we need to better account for all types of memory allocation. We know exactly how much memory the Warden container will allow us before the process is killed. Given that, we can back out how much heap, permgen/metaspace, native, and stack needs to be allocated to keep us under that number. If you're seeing a situation where the amount of memory being used is going over what Warden is allowing and the process is being terminated because of it, it means that our accounting for how the total memory space is used is wrong. When we solve that problem, the rest of this disappears.
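As a rough illustration of that back-out, here is a sketch of the arithmetic; every number in it is a hypothetical placeholder, not the buildpack's actual weightings or algorithm:

```java
// Back-of-the-envelope budget for a single container; all figures are example values.
public class MemoryBudget {
    public static void main(String[] args) {
        long containerLimitMb = 1024;  // memory Warden allows before killing the process
        long maxMetaspaceMb   = 128;   // -XX:MaxMetaspaceSize
        long stackKbPerThread = 1024;  // -Xss1M
        long threadEstimate   = 200;   // Tomcat connector threads plus JVM internal threads
        long nativeHeadroomMb = 64;    // direct buffers, code cache, malloc arenas, etc.

        long stackTotalMb = (stackKbPerThread * threadEstimate) / 1024;
        long heapMb = containerLimitMb - maxMetaspaceMb - stackTotalMb - nativeHeadroomMb;

        // The heap gets whatever is left once every other region is accounted for.
        System.out.printf("-Xmx%dM -XX:MaxMetaspaceSize=%dM -Xss%dK (threads=%d, native=%dM)%n",
                heapMb, maxMetaspaceMb, stackKbPerThread, threadEstimate, nativeHeadroomMb);
    }
}
```

Writing the budget out this way makes it easier to see which region is being under-accounted when the container limit is exceeded.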

@lhotari (Author) commented Mar 9, 2015

To be more precise, we need to better account for all types of memory allocation. We know exactly how much memory the Warden container will allow us before the process is killed. Given that, we can back out how much heap, permgen/metaspace, native, and stack needs to be allocated to keep us under that number.

@nebhale I agree here. That's the primary reason I opened #157 and #158: so that we can improve the estimate of how much memory should be reserved for thread stacks and limit the number of threads in the Tomcat container to match what we have reserved.

If you're seeing a situation where the amount of memory being used is going over what Warden is allowing and the process is being terminated because of it, it means that our accounting for how the total memory space is used is wrong.

I disagree here. It cannot be solved just by improving the memory calculation in the Java buildpack. It looks like it's essential to tune glibc memory allocation settings like MALLOC_ARENA_MAX to get the memory usage under control. That's what #160 is about.
These IBM blog posts by Kevin Grigorenko explain what the problem is about. I also posted these links to #160.
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_native_memory_fragmentation_and_process_size_growth?lang=en

@nebhale (Member) commented Mar 9, 2015

Well, for #158, there is already a limit; we just need to make sure that we take it into account (changing it if we get signoff from @markt-asf). #157 has veered off into dealing with glibc optimizations rather than addressing the issue of thread count during calculations. If we can bring it back into focus, we'll have something to prioritize there.

@lhotari (Author) commented Mar 9, 2015

@nebhale I hope you can bear the off-topic comments in #157. I was making notes about the progress I made in investigating the production issue and drawing conclusions based on the observations. I hope this is valuable in improving the Java Buildpack.

@lhotari (Author) commented Mar 9, 2015

I have now observed that since I started using the buildpack fork that doesn't set MetaspaceSize, the RSS size of the Java process grows at a much slower rate. It still hits the limit and the process gets killed, but that doesn't happen as often because the RSS grows more slowly.

I'm getting memory info from the production app with the MemoryInfoServlet solution, and I have a cron job on one of my Linux boxes to log that info. I'm basing my observations on the data I've been gathering this way.

I haven't yet started using the MALLOC_ARENA_MAX setting, so that I can isolate this hypothesis that setting MetaspaceSize causes the process to consume more resident memory (RSS). I will check the MALLOC_ARENA_MAX assumption (#160) in my production environment after I get this MetaspaceSize testing done. I believe the rest of "the lost memory" issue will be solved by setting MALLOC_ARENA_MAX to a low value.
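For reference, here is a minimal sketch of how an RSS sample can be taken from inside the process on Linux; it only illustrates the kind of data being collected and is not the MemoryInfoServlet itself:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssProbe {
    public static void main(String[] args) throws IOException {
        // Linux only: /proc/self/status contains a "VmRSS:" line with the process's
        // resident set size in kilobytes.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:")) {
                System.out.println(line.trim());
            }
        }
    }
}
```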

@nebhale (Member) commented Mar 25, 2015

@lhotari Do you have any further information showing that setting MetaspaceSize is causing issues in main-line applications? Even if it doesn't pre-allocate memory, it does reduce GC early in an application's life:

Specify a higher value for the option MetaspaceSize to avoid early garbage collections induced for class metadata.

As that's the reason we matched them in the first place, I'm inclined to stick with it unless you've got something compelling that overrides that benefit.

@lhotari (Author) commented Mar 25, 2015

@nebhale No. I've been posting quite a lot of information here and explained my reasoning. I don't have any new information about it.

Even if it doesn't pre-allocate memory, it does reduce GC early in an application's life:

Specify a higher value for the option MetaspaceSize to avoid early garbage collections induced for class metadata.

As that's the reason we matched them in the first place, I'm inclined to stick with it unless you've got something compelling that overrides that benefit.

The class metadata garbage collection doesn't reduce normal JVM heap space GC since class metadata isn't using the JVM heap in Java 8.
The class metadata garbage collection helps reduce the JVM memory overhead.
There seem to be some side effects of the class metadata garbage collection that reduce the RSS usage of the Java process.
I have experienced this with the application I've been tuning.

Where can you find a recommendation to set MetaspaceSize to the same value as MaxMetaspaceSize?
I haven't found a reason for doing that since setting MetaspaceSize doesn't really do any pre-allocation.

@nebhale (Member) commented Mar 25, 2015

@lhotari That quote came from the link where you referenced the pre-allocation. We've set it to minimize that GC, but have no other recommendation to set it.

@lhotari (Author) commented Mar 25, 2015

@nebhale Is there some reason to minimize that GC (class metadata garbage collection in Java 8)?

@nebhale (Member) commented Mar 25, 2015

Simply the best possible startup time. We match the heap sizes as well just to ensure that no time is lost to the GC at startup.

@lhotari (Author) commented Mar 25, 2015

@nebhale Yes, there are always tradeoffs involved.
I don't think that "class metadata garbage collection" will ever cause measurable differences in JVM startup time. The overhead of HotSpot compilation is much larger during JVM startup.
Talking about GC, the HotSpot compiler itself does a lot of garbage collection. There is a reason why it does this, and, for example, setting InitialCodeCacheSize to the same value as ReservedCodeCacheSize is something you shouldn't do either.
Is there still some reason to set MetaspaceSize to the same value as MaxMetaspaceSize? :)

@cagiti commented May 6, 2015

We've performed some testing with this change, and if you set MaxMetaspaceSize and MetaspaceSize to the same value then it disables garbage collection. This is a vital change to the buildpack.

@nebhale (Member) commented May 6, 2015

@cagiti Do you have any documentation stating that it disables garbage collection? Specifically, I'm wondering if you're seeing that GC never happens because there is so much headroom, vs. it going OOM without ever even trying.

@cagiti commented May 6, 2015

@nebhale We are monitoring our application with New Relic. In testing without this change we did not witness GC Collection; it shows only the GC Sweep. However, testing with this change shows that when MetaspaceSize is not set, GC Collection does occur.

@nebhale (Member) commented May 6, 2015

Right, so it seems like what you are seeing is that GC is happening when needed; it's just not needed very often (which is what I'd expect with this configuration). Unless you're seeing OOM problems, I'm not sure you can deduce that garbage collection is disabled.

Keep in mind that in each container, it's perfectly reasonable for your application to use the entire allocated memory space. There's no need for a GC unless you've got so much garbage that you can't stay under the max.

@lhotari (Author) commented May 6, 2015

@cagiti @nebhale I wouldn't expect this change to affect JVM heap GC in any way.

@cgfrost (Contributor) commented Jul 23, 2015

Hi @lhotari
So this pull request has fallen behind master quite a bit, and support in this area has moved on. We are prepared to investigate MALLOC_ARENA_MAX further, but considering that it can easily be set with an environment variable, I'm still inclined to say the default is best for the common use case and others can set it as required.

Also, we have some other work underway that will allow setting the initial memory (Xms, MetaspaceSize, etc.) independently from the maximum. The default will remain the current behavior (initial and max memory set the same), but it will be easy to change with an environment variable. See #200.

With this in mind, would you like to update this pull request so we can proceed with the investigations, or will the coming changes satisfy your needs here?

Thanks, Chris.

@lhotari (Author) commented Jul 24, 2015

@cgfrost This pull request doesn't make sense any more since the memory calculations are now in https://github.com/cloudfoundry/java-buildpack-memory-calculator.
I hope there will be a way to make the JVM start up without any "-XX:MetaspaceSize=" parameter. I have tried to explain why this makes sense in my earlier comments, and I hope you review them again.
#159 (comment) is the comment.

@lhotari closed this Jul 24, 2015
@cgfrost (Contributor) commented Jul 24, 2015

I have made a comment on the other issue, where we are making changes to the memory calculator, about supporting no MetaspaceSize param at all. Thanks for the quick response.
