Add Node Affinity for TaskRuns that share PVC workspace #2630
@@ -15,7 +15,7 @@ weight: 5

- [Mapping `Workspaces` in `Tasks` to `TaskRuns`](#mapping-workspaces-in-tasks-to-taskruns)
- [Examples of `TaskRun` definition using `Workspaces`](#examples-of-taskrun-definition-using-workspaces)
- [Using `Workspaces` in `Pipelines`](#using-workspaces-in-pipelines)
- [Specifying `Workspace` order in a `Pipeline`](#specifying-workspace-order-in-a-pipeline)
- [Affinity Assistant and specifying `Workspace` order in a `Pipeline`](#affinity-assistant-and-specifying-workspace-order-in-a-pipeline)
- [Specifying `Workspaces` in `PipelineRuns`](#specifying-workspaces-in-pipelineruns)
- [Example `PipelineRun` definition using `Workspaces`](#example-pipelinerun-definition-using-workspaces)
- [Specifying `VolumeSources` in `Workspaces`](#specifying-volumesources-in-workspaces)
@@ -89,7 +89,8 @@ To configure one or more `Workspaces` in a `Task`, add a `workspaces` list with

Note the following:

- A `Task` definition can include as many `Workspaces` as it needs.
- A `Task` definition can include as many `Workspaces` as it needs. It is recommended that `Tasks` use
  **at most** one _writable_ `Workspace`.
Review comment: NIT/Ditto: shall we point to an explanation here? Is this true for any kind of workspace, regardless of the type of volume backing them?

Reply: Thanks for reviewing!

I write "recommend" since it is not strictly needed, but for a Task to be usable in most clusters. We have workspace volume sources: You can have multiple writeable PVCs with access mode … Using PVC with access mode … But as you say, you can use multiple writable volumes; you just need to be careful and know what you are doing. The improved performance with this PR was a side effect; my main motivation was to make it easy to use commonly available PVCs in parallel without deadlocks or Tasks that are timed out (as in the current warning). I think it is good that we recommend designing Tasks with only one writable workspace - that makes them fully functional in all cases when using this feature. There are probably other good solutions for cases that were first designed with multiple writable workspaces, e.g. using buckets or other storage that is not limited to only one AZ and is not mounted on the Node. This is a complex field where I ran into many problems over the last weeks; we should improve the documentation as you say. For me it is also important to make Tekton easy to use without corner cases, in a non-technical way :)

Reply: +1 - could also be a follow-up PR if you want.

I agree on the recommendation, I was just wondering if we should document some of the reasoning behind it, or what issues one might run into when using more than one.

Reply: Yes, I'll take more extensive documentation about PVCs and what to think about in a separate PR.
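The recommendation above (at most one writable `Workspace` per `Task`) might look like the following in practice. This is a minimal sketch, not taken from the PR; the Task, workspace, and step names are hypothetical:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: copy-sources      # hypothetical Task name
spec:
  workspaces:
    - name: source        # read-only: safe to share broadly
      readOnly: true
    - name: output        # the single writable workspace
  steps:
    - name: copy
      image: ubuntu
      script: cp -r "$(workspaces.source.path)/." "$(workspaces.output.path)/"
```

Keeping writes confined to one workspace keeps the Task usable regardless of which volume type backs the other workspaces.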

- A `readOnly` `Workspace` will have its volume mounted as read-only. Attempting to write
  to a `readOnly` `Workspace` will result in errors and failed `TaskRuns`.
- `mountPath` can be either absolute or relative. Absolute paths start with `/` and relative paths
@@ -204,26 +205,27 @@ Include a `subPath` in the workspace binding to mount different parts of the same

The `subPath` specified in a `Pipeline` will be appended to any `subPath` specified as part of the `PipelineRun` workspace declaration. So a `PipelineRun` declaring a `Workspace` with a `subPath` of `/foo`, for a `Pipeline` that binds it to a `Task` with a `subPath` of `/bar`, will end up mounting the `Volume`'s `/foo/bar` directory.

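The `subPath` appending described above can be sketched as follows; the claim and workspace names are illustrative, not from the PR:

```yaml
# PipelineRun workspace binding (illustrative names):
workspaces:
  - name: ws
    persistentVolumeClaim:
      claimName: shared-pvc   # hypothetical claim
    subPath: foo
# In the Pipeline, the Task binding adds its own subPath:
#   workspaces:
#     - name: task-ws
#       workspace: ws
#       subPath: bar
# The Task then sees the contents of the volume's foo/bar directory.
```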
#### Specifying `Workspace` order in a `Pipeline`
#### Affinity Assistant and specifying `Workspace` order in a `Pipeline`

Sharing a `Workspace` between `Tasks` requires you to define the order in which those `Tasks`
will be accessing that `Workspace`, since different classes of storage have different limits
for concurrent reads and writes. For example, a `PersistentVolumeClaim` with
[access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
`ReadWriteOnce` only allows `Tasks` running on the same node to write to it at once.

Using parallel `Tasks` in a `Pipeline` will work with `PersistentVolumeClaims` configured with
[access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes)
`ReadWriteMany` or `ReadOnlyMany`, but you must ensure that those are available for your storage class.
When using `PersistentVolumeClaims` with access mode `ReadWriteOnce` for parallel `Tasks`, you can configure a
workspace with its own `PersistentVolumeClaim` for each parallel `Task`.

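For example, a `PipelineRun` could give each of two parallel `Tasks` its own `ReadWriteOnce` claim via a `volumeClaimTemplate`. A sketch with hypothetical workspace names:

```yaml
workspaces:
  - name: ws-for-task-a        # bound to one parallel Task
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - name: ws-for-task-b        # bound to the other parallel Task
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

Each claim is writable by only one Task, so the `ReadWriteOnce` restriction never forces the parallel Tasks onto the same node.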
Use the `runAfter` field in your `Pipeline` definition to define when a `Task` should be executed. For more
information, see the [`runAfter` documentation](pipelines.md#runAfter).

**Warning:** You *must* ensure that this order is compatible with the configured access modes for your `PersistentVolumeClaim`.
Parallel `Tasks` using the same `PersistentVolumeClaim` with access mode `ReadWriteOnce` may execute on
different nodes and be forced to execute sequentially, which may cause `Tasks` to time out.
write to or read from that `Workspace`. Use the `runAfter` field in your `Pipeline` definition
to define when a `Task` should be executed. For more information, see the [`runAfter` documentation](pipelines.md#runAfter).

When a `PersistentVolumeClaim` is used as a volume source for a `Workspace` in a `PipelineRun`,
an Affinity Assistant will be created. The Affinity Assistant acts as a placeholder for `TaskRun` pods
sharing the same `Workspace`. All `TaskRun` pods within the `PipelineRun` that share the `Workspace`
will be scheduled to the same node as the Affinity Assistant pod. This means that the Affinity Assistant is incompatible
with, for example, `NodeSelectors` or other affinity rules configured for the `TaskRun` pods. The Affinity Assistant
is deleted when the `PipelineRun` is completed. The Affinity Assistant can be disabled by setting the
[disable-affinity-assistant](install.md#customizing-basic-execution-parameters) feature gate.

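Per the linked install documentation, feature gates are set in Tekton's `feature-flags` ConfigMap. A sketch of disabling the Affinity Assistant, assuming the default `tekton-pipelines` installation namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines   # assumes the default installation namespace
data:
  disable-affinity-assistant: "true"
```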
**Note:** The Affinity Assistant uses [Inter-pod affinity and anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity),
which requires a substantial amount of processing and can slow down scheduling in large clusters
significantly. We do not recommend using it in clusters larger than several hundred nodes.

**Note:** Pod anti-affinity requires nodes to be consistently labelled; in other words, every
node in the cluster must have an appropriate label matching `topologyKey`. If some or all nodes
are missing the specified `topologyKey` label, it can lead to unintended behavior.

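To illustrate, inter-pod affinity of the kind described above is expressed against a `topologyKey` label that every node must carry. The fragment below is an illustrative pod spec (not the controller's actual output); the label selector is hypothetical, while `kubernetes.io/hostname` is a well-known node label:

```yaml
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: affinity-assistant   # illustrative label
        topologyKey: kubernetes.io/hostname   # every node must carry this label
```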
#### Specifying `Workspaces` in `PipelineRuns`
@@ -0,0 +1,205 @@

# This example shows how both sequential and parallel Tasks can share data

Review comment: ❤️ Documentation on tests, thank you!!

# using a PersistentVolumeClaim as a workspace. The TaskRun pods that share a
# workspace will be scheduled to the same Node in your cluster with an
# Affinity Assistant (unless it is disabled). The REPORTER task does not
# use a workspace so it does not get affinity to the Affinity Assistant
# and can be scheduled to any Node. If multiple concurrent PipelineRuns are
# executed, their Affinity Assistant pods will repel each other to different
# Nodes in a Best Effort fashion.
#
# A PipelineRun will pass a message parameter to the Pipeline in this example.
# The STARTER task will write the message to a file in the workspace. The UPPER
# and LOWER tasks will execute in parallel and process the message written by
# the STARTER, transforming it to upper case and lower case. The REPORTER task
# will use the Task Result from the UPPER task and print it - it is intended
# to mimic a Task that sends data to an external service and shows a Task that
# doesn't use a workspace. The VALIDATOR task will validate the results from
# UPPER and LOWER.
#
# Use the runAfter property in a Pipeline to configure that a task depends on
# another task. Output can be shared both via Task Results (e.g. like the REPORTER task)
# or via files in a workspace.
#
#    -- (upper) -- (reporter)
#   /            \
# (starter)       (validator)
#   \            /
#    -- (lower) ------------

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: parallel-pipeline
spec:
  params:
    - name: message
      type: string

  workspaces:
    - name: ws

  tasks:
    - name: starter   # Tasks that do not declare a runAfter property
      taskRef:        # will start execution immediately
        name: persist-param
      params:
        - name: message
          value: $(params.message)
      workspaces:
        - name: task-ws
          workspace: ws
          subPath: init

    - name: upper
      runAfter:    # Note the use of runAfter here to declare that this task
        - starter  # depends on a previous task
      taskRef:
        name: to-upper
      params:
        - name: input-path
          value: init/message
      workspaces:
        - name: w
          workspace: ws

    - name: lower
      runAfter:
        - starter
      taskRef:
        name: to-lower
      params:
        - name: input-path
          value: init/message
      workspaces:
        - name: w
          workspace: ws

    - name: reporter  # This task does not use a workspace and may be scheduled to
      runAfter:       # any Node in the cluster.
        - upper
      taskRef:
        name: result-reporter
      params:
        - name: result-to-report
          value: $(tasks.upper.results.message)  # A result from a previous task is used as a param

    - name: validator  # This task validates the output from the upper and lower Tasks.
      runAfter:        # It does not strictly depend on the reporter Task,
        - reporter     # but you may want to skip this task if the reporter Task fails.
        - lower
      taskRef:
        name: validator
      workspaces:
        - name: files
          workspace: ws
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: persist-param
spec:
  params:
    - name: message
      type: string
  results:
    - name: message
      description: A result message
  steps:
    - name: write
      image: ubuntu
      script: echo $(params.message) | tee $(workspaces.task-ws.path)/message $(results.message.path)
  workspaces:
    - name: task-ws
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: to-upper
spec:
  description: |
    This task reads and processes a file from the workspace and writes the result
    both to a file in the workspace and as a Task Result.
  params:
    - name: input-path
      type: string
  results:
    - name: message
      description: Input message in upper case
  steps:
    - name: to-upper
      image: ubuntu
      script: cat $(workspaces.w.path)/$(params.input-path) | tr '[:lower:]' '[:upper:]' | tee $(workspaces.w.path)/upper $(results.message.path)
  workspaces:
    - name: w
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: to-lower
spec:
  description: |
    This task reads and processes a file from the workspace and writes the result
    both to a file in the workspace and as a Task Result.
  params:
    - name: input-path
      type: string
  results:
    - name: message
      description: Input message in lower case
  steps:
    - name: to-lower
      image: ubuntu
      script: cat $(workspaces.w.path)/$(params.input-path) | tr '[:upper:]' '[:lower:]' | tee $(workspaces.w.path)/lower $(results.message.path)
  workspaces:
    - name: w
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: result-reporter
spec:
  description: |
    This task is supposed to mimic a service that posts data from the Pipeline,
    e.g. to a remote HTTP service or a Slack notification.
  params:
    - name: result-to-report
      type: string
  steps:
    - name: report-result
      image: ubuntu
      script: echo $(params.result-to-report)
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: validator
spec:
  steps:
    - name: validate-upper
      image: ubuntu
      script: cat $(workspaces.files.path)/upper | grep HELLO\ TEKTON
    - name: validate-lower
      image: ubuntu
      script: cat $(workspaces.files.path)/lower | grep hello\ tekton
  workspaces:
    - name: files
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: parallel-pipelinerun-
spec:
  params:
    - name: message
      value: Hello Tekton
  pipelineRef:
    name: parallel-pipeline
  workspaces:
    - name: ws
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
Review comment: NIT: perhaps we could have a link here that points to an explanation about why we do recommend this.