Updated taskspec with nodeselector to allow for different nodeselector for each task. #2297
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Hi @NikeNano. Thanks for your PR. I'm waiting for a tektoncd member to verify that this patch is reasonable to test. If it is, they should reply with Once the patch is verified, the new status will be reflected by the I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @afrittoli
/ok-to-test
I think the idea with the PodTemplate is that it is part of the Run object types and not the Task/Pipeline. This makes the Task/Pipeline types reusable across clusters. The use case is interesting though, so I'll put this on the Working Group agenda to see what others think of it.
@NikeNano: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
```go
// Set the NodeSelector on PipelineTask level to allow
// for different NodeSelectors for different steps.
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
```
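For context, a hypothetical Pipeline using such a per-task field might look like the sketch below. Note that the `nodeSelector` key on a pipeline task is the proposal under discussion, not an existing Tekton field, and all names and labels are illustrative:

```yaml
# Hypothetical: assumes the proposed per-task nodeSelector field exists.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: mixed-nodes-pipeline
spec:
  tasks:
    - name: preprocess
      taskRef:
        name: preprocess-task
      # proposed field: schedule this task on CPU nodes
      nodeSelector:
        node-pool: cpu
    - name: train
      taskRef:
        name: train-task
      # proposed field: schedule this task on GPU nodes
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-tesla-k80
```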
I think there needs to be a validation so that NodeSelector and Workspace are not used together. Tasks using a Workspace need to be scheduled to where e.g. the PVC (volume) is bound.
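A minimal sketch of the validation suggested here, using simplified stand-in types (the real Tekton types are different; `PipelineTask`, `WorkspaceBinding`, and `validateNodeSelector` below are illustrative only):

```go
package main

import "fmt"

// Simplified stand-ins for the Tekton types under discussion.
type WorkspaceBinding struct{ Name string }

type PipelineTask struct {
	Name         string
	NodeSelector map[string]string
	Workspaces   []WorkspaceBinding
}

// validateNodeSelector rejects a task that sets its own nodeSelector while
// also binding a workspace, since a workspace-backed pod has to be scheduled
// where its volume (e.g. a PVC) is bound.
func validateNodeSelector(pt PipelineTask) error {
	if len(pt.NodeSelector) > 0 && len(pt.Workspaces) > 0 {
		return fmt.Errorf("task %q: nodeSelector and workspaces cannot be used together", pt.Name)
	}
	return nil
}

func main() {
	ok := PipelineTask{Name: "build", NodeSelector: map[string]string{"gpu": "true"}}
	bad := PipelineTask{
		Name:         "train",
		NodeSelector: map[string]string{"gpu": "true"},
		Workspaces:   []WorkspaceBinding{{Name: "data"}},
	}
	fmt.Println(validateNodeSelector(ok) == nil)  // true
	fmt.Println(validateNodeSelector(bad) != nil) // true
}
```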
Thanks, I will check it out :). New to Tekton so I need to read up on workspaces!
Would it be better suited to move the functionality to the run objects, using the names/tags to set nodeSelectors for specific tasks during the run? This PR is related to work on a PR for Kubeflow Pipelines on Tekton: https://github.com/kubeflow/kfp-tekton
@NikeNano You can already specify
Ah I see, what you need is a "by pipelinetask" podtemplate, a little bit like what we have for
Yeah, I would guess this is a use case that could be relevant for several users outside the efforts of Kubeflow Pipelines + Tekton.
Actually this is a bit trickier than I initially thought. Putting a hold to discuss this more. The general need is: being able to override runtime info ( Right now, we do this only for

```yaml
spec:
  serviceAccountName: sa-1
  serviceAccountNames:
    - taskName: build-task
      serviceAccountName: sa-for-build
```

Adding a

```yaml
spec:
  serviceAccountName: sa-1
  podTemplate:
    securityContext:
      runAsNonRoot: true
    # […]
  taskOverride: # <- need a better name
    - name: build-task
      serviceAccountName: sa-for-build
      podTemplate:
        securityContext:
          runAsNonRoot: true
        # […]
```

@bobcatfish @afrittoli @sbwsg @NikeNano wdyt ? I would really love to keep those k8s-specific and runtime fields in

/hold
I don't follow, do you mean that the following:
is good or bad? I am new to Tekton so I don't have that much of an opinion, but the above looks good to me. I have used Argo a little bit and sometimes get the feeling that Tekton is more verbose, and that even simple pipelines become quite verbose. I would be happy to contribute if changes are needed in the controller or similar to handle the changes!
Any progress on this @dibyom ? :)
I like the
I completely agree @vdemeester. Before we go ahead with anything, @NikeNano could you describe in a bit more detail the use case where you need this? I think it will help us figure out the best solution.
@bobcatfish I made this initial PR as part of the effort for Kubeflow Pipelines to support both Argo and Tekton as the underlying infrastructure for execution. For Kubeflow Pipelines, node selectors have several use cases, one of which is to allow for the usage of GPU resources (example from the docs). I guess some users also want to control which node each workload goes to, and thus utilize the node selector. For the use case with GPUs, not all steps of a pipeline usually have the possibility to utilize GPU resources, and thus the current setup where one PodTemplateSpec is used is not sufficient.
Thanks for the explanation @NikeNano !
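For reference, this is roughly what the current single-PodTemplate approach looks like in an existing Tekton `PipelineRun` (resource names and the accelerator label are illustrative): the `nodeSelector` below applies to every task in the run, which is exactly the limitation described above.

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: train-pipeline-run
spec:
  pipelineRef:
    name: train-pipeline
  # One podTemplate for the whole run: every task, GPU-capable or not,
  # is scheduled onto the GPU node pool.
  podTemplate:
    nodeSelector:
      cloud.google.com/gke-accelerator: nvidia-tesla-k80
```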
Sorry... a little late to the party here. We "kustomize" our Tasks and Pipelines on-the-fly just prior to execution to support things like serviceName, namespace, runtimeclass, bonus annotations, etc., for these kinds of flows. Maybe one option to consider is to somehow provide first-class support for kustomize in our "run" CRDs instead of adding our own override syntax.
Could you point me to where in the code base this is done, @skaegi? I would like to take a look to see how it is done :) To me it sounds like this "kustomize" is very similar; how would it be specified?
I'm embarrassed to say our agent is proprietary right now, but kustomize is a technology built into kubectl and described here: https://github.com/kubernetes-sigs/kustomize
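As a rough sketch of what a kustomize-based override could look like for this use case (file names, resource names, and labels are all hypothetical; this patches a run object before applying it, much as the proprietary agent described above does on-the-fly):

```yaml
# kustomization.yaml (hypothetical): overlay a nodeSelector onto a
# PipelineRun defined in pipelinerun.yaml before applying it.
resources:
  - pipelinerun.yaml
patchesStrategicMerge:
  - |-
    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      name: train-pipeline-run
    spec:
      podTemplate:
        nodeSelector:
          node-pool: gpu
```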
Ahh .... thanks for the link!
I opened #2362 to look specifically at adding the per-task override that @vdemeester first mentioned, as I think it might be the cleanest approach. The kustomization we do in our agent is different in that we are also altering non-runtime definitions. It's interesting, but I think more flexible than what is needed.
@NikeNano it would make sense yes 😉
@vdemeester: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The goal with this PR is to allow different tasks in a Pipeline to have different node selectors.
Currently it is only possible (correct me if I am wrong) to set the nodeSelector on the podTemplate spec, which results in all tasks having the same node selector. This limits the node selection to one choice and also limits the workflow usage.
My first contribution to Tekton, super happy for feedback and any suggestions.
Changes
Submitter Checklist
These are the criteria that every PR should meet, please check them off as you
review them:
See the contribution guide for more details.
Double check this list of stuff that's easy to miss:
cmd dir, please update the release Task to build and release this image.
Reviewer Notes
If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.
Release Notes