Some triggered Tekton jobs should have resource requests/limits #1122
The ones I notice right now are the `plumbing-image-build` and `pull-pipeline-kind-k8s-v1-21-e2e` PR `PipelineRun`s, and the `build-and-push-test-runner` cronjob-triggered `PipelineRun`. I've seen the `test-runner` image builds cause OOMs on their nodes, and the `plumbing-image-build` one I'm looking at right now is using over 5 GB of memory. The `pull-pipeline-kind-k8s-v1-21-e2e` pods I've seen have ranged between 2 and 4 GB of memory used.

None of them (or any of the other Tekton `PipelineRun`s, for that matter) have any `requests` or `limits` configured, so they can end up on the same node, or on a node with one of the other high-memory pods always running in the cluster (i.e., prometheus and kafka), and cause problems. Given that dogfooding is hardcoded to 5 n1-standard-4s, with ~13 GB of allocatable memory each, it's pretty easy for just a few of the high-memory pods to land on the same node and swamp it.
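For reference, a minimal sketch of the kind of change being asked for, assuming the step-level `resources` field available on Tekton `Task` steps; the Task name, image, and values below are hypothetical and not taken from the plumbing repo:

```yaml
# Hypothetical Task showing explicit requests/limits on a memory-hungry build step.
# With a request set, the scheduler spreads these pods across nodes; with a limit set,
# a runaway build gets OOM-killed on its own instead of swamping the whole node.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: example-image-build          # hypothetical, not the real plumbing-image-build Task
spec:
  workspaces:
    - name: source
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.8.0   # hypothetical builder image
      args: ["--dockerfile=Dockerfile", "--context=$(workspaces.source.path)"]
      resources:                     # standard Kubernetes container resources
        requests:
          cpu: "1"
          memory: 4Gi
        limits:
          memory: 6Gi
```

Note that newer Tekton Pipelines releases move step resource settings to a `computeResources` field, so the exact field name depends on the version running in dogfooding.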
Comments

Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.

Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.

Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.

@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/remove-lifecycle rotten

/lifecycle frozen