Avoid unnecessary work within reconciliation loop - LimitRanges #2666
This artifact store lookup is addressed by #2947
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
/remove-lifecycle stale
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
Expected Behavior

In a larger organization with e.g. 400+ apps using Tekton CI/CD pipelines, where developers usually do `git push` to trigger new `PipelineRun`s with typically 10-20 tasks, several times a day, many `TaskRun`s are created. I expect Tekton to be an efficient CI/CD platform that is also economical with its requests to the Kubernetes control plane, especially etcd, which is a critical component in any Kubernetes cluster.
Actual Behavior

A typical git-to-deploy `Pipeline` will probably consist of 10-20 `TaskRun` pods to be created. For every `TaskRun` to be created we make (at least) these requests to the ApiServer - synchronous request-response pairs that also delay `TaskRun` creation:

- `PipelineResources` and pre-workspaces - we may rethink this.
- `creds-init` - this may be redesigned with "Improve UX of getting credentials into Tasks" #2343
- LimitRanges

Looking up LimitRanges is a bit of a strange responsibility for the `TaskRun` controller within the reconciliation loop. There are a few use cases:

The Namespace does not have a LimitRange specified. (I assume this is the most common case)

In this case a `LIST` operation against the ApiServer and `etcd` is not needed for every `TaskRun`.
The Namespace does have a LimitRange specified, and it contains Default values. (I assume this is the second most common case)

This is fine, and it should work without Tekton needing to do any extra work. We even have an example of this. In this case a `LIST` operation against the ApiServer and `etcd` is not needed for every `TaskRun`.
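For reference, the second case looks roughly like this - a LimitRange carrying Default values, so Kubernetes fills in container limits and requests at admission time without Tekton doing anything (name, namespace, and values below are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources   # illustrative name
  namespace: ci             # illustrative namespace
spec:
  limits:
    - type: Container
      # Applied as the container's limits when it declares none.
      default:
        cpu: 500m
        memory: 256Mi
      # Applied as the container's requests when it declares none.
      defaultRequest:
        cpu: 100m
        memory: 128Mi
```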
The Namespace does have a LimitRange specified, but without Default values. (I assume this is the rarest case)

This is the only case where our extra request for every `TaskRun` provides any value. But is this the responsibility of the `TaskRun` controller within the reconciliation loop? In an environment where an operations or platform team has configured it like this, it probably means they want users to specify resource limits explicitly for their workloads.
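To make the second case above concrete: when Default values exist, the defaulting happens server-side in the LimitRanger admission plugin, so a controller never needs to fetch the LimitRange itself. A minimal sketch of that defaulting logic, using simplified stand-in types rather than the real `k8s.io/api` types:

```go
package main

import "fmt"

// Resources is a simplified stand-in for a resource list,
// e.g. {"cpu": "100m"}. The real code uses resource.Quantity.
type Resources map[string]string

// LimitRangeItem is a stand-in for the defaultRequest section of a
// LimitRange item of type Container.
type LimitRangeItem struct {
	DefaultRequest Resources
}

// applyDefaults mirrors, in spirit, what the LimitRanger admission
// plugin does server-side: fill in any request the container left
// unset, while never overriding an explicitly set value.
func applyDefaults(requests Resources, item LimitRangeItem) Resources {
	out := Resources{}
	for k, v := range requests {
		out[k] = v
	}
	for k, v := range item.DefaultRequest {
		if _, ok := out[k]; !ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	step := Resources{"cpu": "250m"} // the step only set cpu
	lr := LimitRangeItem{DefaultRequest: Resources{"cpu": "100m", "memory": "128Mi"}}
	merged := applyDefaults(step, lr)
	fmt.Println(merged["cpu"])    // the step's own value wins: 250m
	fmt.Println(merged["memory"]) // the default fills the gap: 128Mi
}
```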
There are alternatives:

- A `Task` may declare a Step template to be used for its steps.
- A global Step Template could help with this, as pointed out in "Add global stepTemplate config" #2600.
- A `PipelineRun` default Step template, specified in a `TriggerTemplate`.

All of the above are good alternatives to looking up a namespace configuration for every `TaskRun` creation (a very common operation when using Tekton).
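As an illustration of the first alternative, a per-`Task` `stepTemplate` might look like this (an illustrative Task; field layout follows the v1beta1 API, where `stepTemplate` accepts container fields such as `resources`):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build            # illustrative name
spec:
  # Applied to every step that does not override these fields itself.
  stepTemplate:
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
  steps:
    - name: compile
      image: golang:1.14
      script: go build ./...
```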