Add default requests and limits to the init containers #2186
Conversation
Since this is a community-submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?
👍 I think that's a good thing to have, thanks for creating the PR!
I'd like to optimize the resource requirements a bit more, let me know if that makes sense to you.
    Limits: map[corev1.ResourceName]resource.Quantity{
        corev1.ResourceMemory: resource.MustParse("1Gi"),
        corev1.ResourceCPU:    resource.MustParse("1"),
    },
Picking good default numbers is pretty hard. I did a few tests, here are my observations.
ES Pod consideration
ES Pod defaults are currently set to 2Gi RAM (requests and limits), with no CPU limitation.
We could set the same defaults for the init container that runs before the Pod, since we need those resources anyway, but I think it's cleaner if we aim for lower, more realistic values.
Even though no CPU is specified, we can reasonably say "0.1" CPU is a fair strict minimum for an Elasticsearch Pod (I expect "1" to be more realistic).
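For reference, here is a minimal sketch of the ES container defaults described above, i.e. 2Gi memory for both requests and limits and no CPU constraint. This is not code from this PR: the variable name is made up, and it assumes the usual corev1 (k8s.io/api/core/v1) and k8s.io/apimachinery/pkg/api/resource imports.

    // Current ES container defaults as described above:
    // 2Gi memory on both sides, no CPU request or limit.
    esDefaultResources := corev1.ResourceRequirements{
        Requests: map[corev1.ResourceName]resource.Quantity{
            corev1.ResourceMemory: resource.MustParse("2Gi"),
        },
        Limits: map[corev1.ResourceName]resource.Quantity{
            corev1.ResourceMemory: resource.MustParse("2Gi"),
        },
    }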
CPU requirements
Assigning both a request and a limit of "100m" CPU to the init container still gives pretty good performance on an n1-standard-8 GKE machine: the init container took about a second from start to finish. It obviously depends on the underlying CPU, but I think we could aim for a lower request here. "1" seems to be a generally acceptable resource limit, since the value is relative to the underlying CPU. I suggest we lower the request to "100m".
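As a side note (a self-contained sketch, not from the PR): "0.1" and "100m" are just two spellings of the same CPU quantity, so either works in the defaults below.

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // "0.1" CPU and "100m" parse to the same quantity: 100 millicores.
        a := resource.MustParse("0.1")
        b := resource.MustParse("100m")
        fmt.Println(a.MilliValue(), b.MilliValue(), a.Cmp(b) == 0) // 100 100 true
    }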
Memory requirements
I set both requests and limits to "10Mi" RAM, and everything worked fine on a fresh, clean cluster. So far the init container only does filesystem operations, so there should be no hard RAM requirement. I think there's no reason not to use this limit: if the init container breaks, we'll notice it and realize we probably did something wrong.
Suggestion
I think we could use the following values:
    defaultResources = corev1.ResourceRequirements{
        Requests: map[corev1.ResourceName]resource.Quantity{
            corev1.ResourceMemory: resource.MustParse("10Mi"),
            corev1.ResourceCPU:    resource.MustParse("0.1"),
        },
        Limits: map[corev1.ResourceName]resource.Quantity{
            corev1.ResourceMemory: resource.MustParse("10Mi"),
            corev1.ResourceCPU:    resource.MustParse("1"),
        },
    }
Let me know if that makes sense to you!
👍 overall, but I think we want to make sure requests == limits, otherwise the init containers' default settings will keep the Pod from getting Guaranteed QoS even if the user sets the Elasticsearch requests == limits.
Looks good! I'll change it now.
@sanderma I think Anya is right here, we should probably aim for requests == limits.
I think it's fine to use "0.1" CPU for both the request and the limit?
Yes, I agree. It is usually better to set it that way; it's only useful to have a higher limit if you expect infrequent bursts in resource consumption.
I'll set them to 0.1.
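For clarity, this is what the suggested defaults above would look like with requests == limits (same hypothetical imports as the earlier snippet), keeping the Pod eligible for the Guaranteed QoS class:

    defaultResources = corev1.ResourceRequirements{
        Requests: map[corev1.ResourceName]resource.Quantity{
            corev1.ResourceMemory: resource.MustParse("10Mi"),
            corev1.ResourceCPU:    resource.MustParse("0.1"),
        },
        // Limits mirror the requests so the init container does not
        // prevent the Pod from getting the Guaranteed QoS class.
        Limits: map[corev1.ResourceName]resource.Quantity{
            corev1.ResourceMemory: resource.MustParse("10Mi"),
            corev1.ResourceCPU:    resource.MustParse("0.1"),
        },
    }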
@sanderma could you also sign the CLA?
Is it maybe worthwhile to also document the names of the init containers, so people know how to modify them if necessary? As it is, I think you'd only find out via experimentation or by reading the code. That said, it should be very rare to need this information, so maybe we can leave it as is for now.
The problem with the init containers is that it is hard to change the podTemplate: if you change anything (say the resource limits), the defaults are not applied correctly. That is something I wanted to solve after this ;)
Kind regards,
Sander Mathijssen
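Purely as an illustration of the kind of follow-up meant here (a hypothetical helper, not code from this PR; assumes the corev1 import), apply the defaults only when the user's podTemplate leaves the container's resources empty, so explicit user values are respected:

    // setDefaultResources is a hypothetical helper: it applies the default
    // requests/limits only if the user-provided container spec sets no
    // resources at all, leaving any explicit user values untouched.
    func setDefaultResources(c *corev1.Container, defaults corev1.ResourceRequirements) {
        if len(c.Resources.Requests) == 0 && len(c.Resources.Limits) == 0 {
            c.Resources = defaults
        }
    }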
Done!
Jenkins test this please
LGTM, thanks @sanderma!
See issue #2179: init containers should have requests and limits set.
This adds requests and limits to the fs init container.
The values might be too large, but that is easily fixed.
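To make that concrete, a hypothetical sketch of attaching the defaults discussed above to the fs init container spec (the container name is made up for illustration):

    // Hypothetical: wire the default requests/limits into the init container.
    // "init-fs" is an illustrative name, not necessarily the one used by the operator.
    fsInitContainer := corev1.Container{
        Name:      "init-fs",
        Resources: defaultResources, // the value suggested earlier in this thread
    }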