
Add default requests and limits to the init containers #2186

Merged
merged 3 commits into elastic:master on Dec 4, 2019

Conversation

sanderma
Contributor

See issue #2179: init containers should have requests and limits set.

This adds requests and limits to the fs init container.
The values might be too large, but that is easily fixed.

@elasticmachine

Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?

Contributor

@sebgl sebgl left a comment


👍 I think that's a good thing to have, thanks for creating the PR!
I'd like to optimize the resource requirements a bit more; let me know if that makes sense to you.

pkg/controller/elasticsearch/initcontainer/prepare_fs.go (outdated)
Limits: map[corev1.ResourceName]resource.Quantity{
	corev1.ResourceMemory: resource.MustParse("1Gi"),
	corev1.ResourceCPU:    resource.MustParse("1"),
},
Contributor


Picking good default numbers is pretty hard. I did a few tests; here are my observations.

ES Pod considerations

ES Pod defaults are currently set to 2Gi RAM (requests and limits), with no CPU limitation.
We could set the same defaults for the init container that runs before the main container, since the Pod needs those resources anyway, but I think it's cleaner if we aim for lower, more realistic values.
Even though no CPU value is specified by default, "0.1" CPU seems to be a fair strict minimum for an Elasticsearch Pod (I expect "1" to be more realistic).

CPU requirements

Assigning both a request and a limit of "100m" CPU to the init container still gives pretty good performance on an n1-standard-8 GKE machine: the init container took about a second from start to end. It obviously depends on the underlying CPU, but I think we could aim for a lower request here. "1" seems to be a generally acceptable resource limit, since the value is relative to the underlying CPU. I suggest we lower the request to "100m" (0.1 CPU).

Memory requirements

I set both requests and limits to "10Mi" RAM and everything worked fine with a fresh cluster. So far the init container only does filesystem operations, so there should be no hard RAM requirement. I see no reason not to use this limit: if the init container breaks, we'll notice it and realize we probably did something wrong.

Suggestion

I think we could use the following values:

defaultResources = corev1.ResourceRequirements{
		Requests: map[corev1.ResourceName]resource.Quantity{
			corev1.ResourceMemory: resource.MustParse("10Mi"),
			corev1.ResourceCPU:    resource.MustParse("0.1"),
		},
		Limits: map[corev1.ResourceName]resource.Quantity{
			corev1.ResourceMemory: resource.MustParse("10Mi"),
			corev1.ResourceCPU:    resource.MustParse("1"),
		},
	}

Let me know if that makes sense to you!

Contributor


👍 overall, but I think we want to make sure requests == limits; otherwise the init containers' default settings will keep the Pod from getting the Guaranteed QoS class, even if the user sets the Elasticsearch requests == limits.

Contributor Author


Looks good! I'll change it now.

Contributor


@sanderma I think Anya is right here; we should probably aim for requests == limits.
I think it's fine to use "0.1" CPU for both the request and the limit?

Contributor Author


Yes, I agree. It's usually better to set it that way; a higher limit is only useful if you expect infrequent bursts in resource consumption.
I'll set both to 0.1.
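
For reference, a minimal sketch of what the settled-on defaults look like with requests == limits, reusing the "10Mi" memory and "0.1" CPU values from the discussion above (the exact values in the merged commit may differ):

defaultResources = corev1.ResourceRequirements{
		Requests: map[corev1.ResourceName]resource.Quantity{
			corev1.ResourceMemory: resource.MustParse("10Mi"),
			corev1.ResourceCPU:    resource.MustParse("0.1"),
		},
		Limits: map[corev1.ResourceName]resource.Quantity{
			// Identical to the requests, so the init container does not
			// prevent the Pod from getting the Guaranteed QoS class.
			corev1.ResourceMemory: resource.MustParse("10Mi"),
			corev1.ResourceCPU:    resource.MustParse("0.1"),
		},
	}

With requests equal to limits on every container in the Pod, init containers included, the Pod can keep the Guaranteed QoS class when the user also sets requests == limits on the Elasticsearch container, which is the concern raised above.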

@sebgl
Contributor

sebgl commented Dec 3, 2019

@sanderma could you also sign the CLA?

@anyasabo
Contributor

anyasabo commented Dec 3, 2019

Is it maybe worthwhile to also document the names of the init containers, so people know how to modify them if necessary? As is, I think you'd only find out via experimentation or by reading the code.

That said it should be very rare to need to know this information, so maybe we can leave it as is for now.

@sanderma
Contributor Author

sanderma commented Dec 4, 2019 via email

@sanderma
Contributor Author

sanderma commented Dec 4, 2019

@sanderma could you also sign the CLA?

Done!

@sebgl
Contributor

sebgl commented Dec 4, 2019

Jenkins test this please

Contributor

@sebgl sebgl left a comment


LGTM, thanks @sanderma!

@sebgl sebgl merged commit 58f0aec into elastic:master Dec 4, 2019
@pebrc pebrc added >enhancement Enhancement of existing functionality v1.0.0 labels Dec 12, 2019
@thbkrkr thbkrkr changed the title Added limits to initpod Add default requests and limits to the init containers Dec 20, 2019
Labels
>enhancement (Enhancement of existing functionality), v1.0.0

5 participants