[TEP-0104]: Support Task-level resource limits #703

Merged · 1 commit · Jun 8, 2022
status: implementable
title: Task-level Resource Requirements
creation-date: 2022-04-08
last-updated: 2022-05-11
authors: @lbernick, @vdemeester, @jerop

TEP-0104: Task-level Resource Requirements

Summary

Tekton currently provides Step-level configuration for Kubernetes resource requirements via the Task and TaskRun specs. This document proposes allowing users to configure the overall resource requests of Tekton Tasks and TaskRuns.

Pod resource requirements are used for scheduling, eviction, and quality of service. Kubernetes will only schedule a pod to a node that has enough resources to accommodate its requests, and will reserve enough system resources to meet the pod’s requests. In addition, if a pod exceeds its memory requests, it may be evicted from the node. Limits are enforced by both the kubelet and the container runtime (via cgroups). If a container uses more memory than its limit, it is OOMKilled, and if it exceeds its CPU limit, it is throttled. For more information, see “Resource Management for Pods and Containers”. Resource requirements are also used to determine a pod’s quality of service, which affects how likely it is to be scheduled or evicted.
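As a concrete illustration of how effective requests add up, consider a hypothetical two-container pod (not one generated by Tekton; the names and values are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: example-pod    # illustrative name
spec:
  containers:
    - name: main
      image: busybox
      resources:
        requests:
          cpu: 500m    # the scheduler reserves this...
    - name: helper
      image: busybox
      resources:
        requests:
          cpu: "1"     # ...plus this: the effective pod request is 1.5 CPU
```

Kubernetes will only schedule this pod to a node with at least 1.5 CPU unreserved.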

Resource requirements can't be updated after pods are created.

Resource requirements in Tekton

Tekton Steps correspond to containers, and resource requirements can be specified on a per-Step basis. Step resource requirements can be specified via Task.StepTemplate, Task.Steps, or TaskRun.StepOverrides (increasing order of precedence).
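A sketch of these three levels (the values are illustrative):

```
# In the Task: a stepTemplate default, overridden per Step
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build
spec:
  stepTemplate:
    resources:
      requests:
        cpu: 250m        # lowest precedence
  steps:
    - name: compile
      resources:
        requests:
          cpu: 500m      # overrides the stepTemplate value
---
# In the TaskRun: highest precedence
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-run
spec:
  taskRef:
    name: build
  stepOverrides:
    - name: compile
      resources:
        requests:
          cpu: 750m      # overrides the Step value
```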

Goals

  • Task-level resource requirements are configurable at authoring time (i.e. on Task) and runtime (i.e. on TaskRun).
    • Authoring time configuration is provided for consistency with existing functionality (Task.Step.Resources).
    • The reasons for runtime configuration are discussed in more detail in TEP-0094.

Non-Goals

  • Configuration for the amount of resources consumed by an entire PipelineRun, as requested in #4271.
    • We could still choose in the future to provide configuration on Pipeline for Task-level resource requirements (e.g. via params).
  • Parameterizing resource requirements, as requested in #4080. This would be a valuable addition to Tekton but is out of scope for this proposal.

API Changes

Augment the Task and TaskRun APIs with a resources field that allows the user to configure the resource requirements of a Task. An example Task is as follows.

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: image-build-task
spec:
  steps:
  - name: step-1
  - name: step-2
  resources:
    requests:
      memory: 1Gi
    limits:
      memory: 2Gi
```

This field should also be added to PipelineRun.TaskRunSpecs.
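For illustration, the field on PipelineRun.TaskRunSpecs might look like the sketch below; the exact field name and placement follow this proposal’s resources naming, which (per the note below) is still under discussion:

```
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-pipeline-run      # illustrative name
spec:
  pipelineRef:
    name: my-pipeline
  taskRunSpecs:
    - pipelineTaskName: image-build-task
      resources:             # assumed placement of the proposed field
        requests:
          memory: 1Gi
```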


Note: Currently, Task.Resources is used for PipelineResources. Appropriate naming for this field is still under discussion.

Applying Task Resources to Containers

Requests

As mentioned in Resource Requirements in Kubernetes, the effective resource requests of a pod are the sum of the resource requests of its containers, and this value is used to determine the resources reserved by the kubelet when scheduling a pod. Therefore, when a user configures a resource request for a Task, that request should be applied to only one container in the pod. (See Interaction with LimitRanges for caveats.) All other containers will not have resource requests. This results in a pod with an effective resource request that is the same as that of the Task, and it will be scheduled correctly. This is similar to Tekton’s handling of resource requests pre v0.28.0, where the maximum resource request of all containers was applied to only one container, and the rest were left without resource requests.

Limits

Because Kubernetes considers containers without resource limits to have higher limits than those with limits configured, configuration for limits is different than configuration for requests. There are several options for how Task-level resource limits could be implemented:

  • If the task resource limit is applied to only one container, the pod will not have an effective limit due to the other containers without limits. This defeats the purpose of the feature.
  • If the task limit is spread out among containers, a task where one step is more resource intensive than all the others could get oomkilled or throttled.
  • If the task limit is applied to each container, the pod has a much higher effective limit than desired.

However, the effective resource limit of a pod is not used for scheduling (see How Pods with resource requests are scheduled and How Kubernetes applies resource requests and limits). Instead, container limits are enforced by the container runtime.

This means that applying the task resource limits to each container in the pod will result in a pod with higher effective limits than desired, but which prevents any individual Step from exceeding configured limits, as is likely desired.

If Task-level limits are set, Tekton should apply the smallest possible resource request to each container. This is because containers with limits but not requests automatically have their requests set to their limits, which would result in a pod with much higher effective requests than desired.

Sidecars

Sidecar containers run in parallel with Steps, meaning that their resource requests and limits should actually be summed with Steps’ resource requirements. In the case of Task-level limits, it is not clear how to distribute the limit between a Sidecar and Steps, since they run at the same time. Therefore, the Task-level resource limit should be interpreted as the limit only for Steps, and Sidecar limits should be set separately. For consistency, Task-level requests should also be interpreted as requests for Steps only. Users should be able to specify both Task resource requirements and Sidecar resource requirements.

Interaction with Step resource requirements

Because Tekton will handle the logic for the combined resource requests of a Task or TaskRun, users should not be able to specify resource requests for both the Task or TaskRun and individual Steps. This means:

  • If a Task defines StepTemplate.Resources or Step.Resources:
    • If the Task also defines Resources, the Task will be rejected by the admission webhook.
    • If the corresponding TaskRun defines Resources, the value from the TaskRun will apply and the value from the Task will be ignored.
  • If a Task or TaskRun defines Resources, the admission webhook should reject TaskRuns that also define StepOverrides.Resources.
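For instance, a minimal Task that these rules would reject (the names are illustrative):

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: invalid-task       # illustrative name
spec:
  steps:
    - name: step-1
      resources:           # Step-level requests...
        requests:
          cpu: 500m
  resources:               # ...plus Task-level resources: rejected by the webhook
    requests:
      cpu: 1
```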

Users may choose to set Step resource limits in addition to Task-level resource requirements. If both Task resource requests and Step resource limits are configured, this is permissible as long as the maximum Step resource limit is greater than the Task resource request. The Task resource request should be applied to the container with the greatest resource limit (or with no limit, if such a container exists).
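For instance, given the following Task, the 1.5 CPU request would be applied to step-3, the Step with no limit:

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
      resources:
        limits:
          cpu: 500m
    - name: step-2
      resources:
        limits:
          cpu: 1
    - name: step-3
  resources:
    requests:
      cpu: 1.5
```

| Step name | CPU request | CPU limit |
| --------- | ----------- | --------- |
| step-1    | N/A         | 500m      |
| step-2    | N/A         | 1         |
| step-3    | 1.5         | N/A       |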


Users should not be able to mix and match Step resource requirements and Task resource requirements, even for different types of compute resources (e.g. CPU, memory).

Interaction with LimitRanges

Users may have LimitRanges defined in a namespace where Tekton pods are run, which may define minimum or maximum resource requests per pod or container. We already update container resource requirements to comply with namespace LimitRanges, and much of this code should not need to change. If resource requests are “added” to some containers to comply with a minimum request, they should be “subtracted” from the overall total. In addition, if the total resource request would result in a container that has more than the maximum container requests permitted by the limit range, the requests may be spread out between containers. If there is no container configuration that satisfies the LimitRange, the TaskRun will be rejected.

Other Considerations

  • Tekton pods currently have a burstable quality of service class, which will not change as a result of this implementation.
  • We should consider updating our catalog Task guidelines with guidance around when to specify resource requirements.

Future Work

We should consider deprecating Task.Step.Resources, Task.StepTemplate.Resources, and TaskRun.StepOverrides. Specifying resource requirements for individual Steps is confusing and likely too granular for many CI/CD workflows.

We could also consider support for both Task-level and Step-level resource requirements if the requirements are for different types of compute resources (for example, specifying CPU request at the Step level and memory request at the Task level). However, this functionality will not be supported by the initial implementation of this proposal; it can be added later if desired.

Examples

Example with requests only

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
    - name: step-2
    - name: step-3
  resources:
    requests:
      cpu: 1.5
```

| Step name | CPU request | CPU limit |
| --------- | ----------- | --------- |
| step-1    | N/A         | N/A       |
| step-2    | N/A         | N/A       |
| step-3    | 1.5         | N/A       |

Example with limits only

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
    - name: step-2
    - name: step-3
  resources:
    limits:
      cpu: 2
```

| Step name | CPU request       | CPU limit |
| --------- | ----------------- | --------- |
| step-1    | smallest possible | 2         |
| step-2    | smallest possible | 2         |
| step-3    | smallest possible | 2         |

(Here, the smallest possible request is based on both what the kubelet will allow, and the minimum values allowed by any LimitRange.)

Example with both requests and limits

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
    - name: step-2
    - name: step-3
  resources:
    requests:
      cpu: 1.5
    limits:
      cpu: 2
```

| Step name | CPU request       | CPU limit |
| --------- | ----------------- | --------- |
| step-1    | smallest possible | 2         |
| step-2    | smallest possible | 2         |
| step-3    | 1.5               | 2         |

(Here, the smallest possible request is based on both what the kubelet will allow, and the minimum values allowed by any LimitRange.)

Example with Sidecar

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
    - name: step-2
  sidecars:
    - name: sidecar-1
      resources:
        requests:
          cpu: 750m
        limits:
          cpu: 1
  resources:
    requests:
      cpu: 1.5
```

The resulting pod would have the following containers:

| Step/Sidecar name | CPU request | CPU limit |
| ----------------- | ----------- | --------- |
| step-1            | N/A         | N/A       |
| step-2            | 1.5         | N/A       |
| sidecar-1         | 750m        | 1         |

Example with LimitRange

```
apiVersion: v1
kind: LimitRange
metadata:
  name: my-limit-range
spec:
  limits:
    - max:
        cpu: 750m
      min:
        cpu: 250m
      type: Container
```
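As an illustration, consider a hypothetical TaskRun in this namespace (the name and values are assumptions, not from the proposal):

```
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: my-task-run        # hypothetical name
spec:
  taskRef:
    name: my-task
  resources:
    requests:
      cpu: 1.5             # exceeds the 750m per-container maximum
```

Because the LimitRange caps each container at 750m CPU, the 1.5 CPU request cannot be applied to a single container; per the behavior described in “Interaction with LimitRanges”, it would be spread across containers (for example, 750m, 500m, and 250m across three Steps, respecting the 250m minimum).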

If the “Resources.Requests” field were present on the Task instead of the TaskRun, the Task would be rejected.

Example with Step resource requests overridden by TaskRun

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
      resources:
        requests:
          cpu: 500m
    - name: step-2
      resources:
        requests:
          cpu: 1
```
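A TaskRun that sets Task-level requests would override the Step-level requests above (a hypothetical sketch; the names and values are illustrative assumptions):

```
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: my-task-run
spec:
  taskRef:
    name: my-task
  resources:
    requests:
      cpu: 2       # applied per the rules above; the Step requests are ignored
```

If, however, the TaskRun combined Task-level resources with StepOverrides, as sketched below:

```
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: my-task-run-with-overrides
spec:
  taskRef:
    name: my-task
  resources:
    requests:
      cpu: 2
  stepOverrides:   # not allowed together with Task-level resources
    - name: step-1
      resources:
        requests:
          cpu: 750m
```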

This TaskRun would be rejected.


Example with both CPU and memory

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: my-task
spec:
  steps:
    - name: step-1
    - name: step-2
      resources:
        limits:
          memory: 1Gi
  resources:
    requests:
      cpu: 1.5
      memory: 500Mi
    limits:
      memory: 1Gi
```

The resulting pod would have the following containers:

| Step name | CPU request | CPU limit | Memory request    | Memory limit |
| --------- | ----------- | --------- | ----------------- | ------------ |
| step-1    | N/A         | N/A       | 500Mi             | 1Gi          |
| step-2    | 1.5         | N/A       | smallest possible | 1Gi          |

Alternatives

  • Request or implement support upstream in Kubernetes for pod-level resource requirements.
    • Since Kubernetes runs pod containers in parallel, they have no reason to implement this feature. We will also have less control over the implementation and timeline.
  • Support priority classes on Tekton pods to give users more control over the scheduling of Tekton pods.
    • This solution is a worse user experience, as it requires users to think about how their pods should be scheduled in relation to other pods that may be running on a cluster, rather than considering the pod in isolation and letting Kubernetes handle scheduling.
  • Run Tekton steps as init containers (which run sequentially).
    • We used to do this, but moved away from this because of poor support for logging and no way to support Task Sidecars (see #224).
  • Instruct users to apply their resource requests to only one Step.
    • This requires users to have a clear understanding of how resource requirements from Steps are applied (which should ideally be an implementation detail), but this is something we should make very clear anyway.
  • Apply only the maximum Step resource request and ignore all others, reverting to pre-0.28.0 behavior.
    • This would create confusion and break existing Pipelines.
Review discussion on the resources field (API Changes):

**Contributor:** A check on the augmented field: the existing Task.spec.resources field contains inputs and outputs for TaskResources. The context is somewhat different (inputs/outputs for running the actual build task versus limits/requests for the Pod beneath), but after checking the other existing resources fields, like Task.Steps.*, I think keeping the same field name and adding limits and requests under it is still the better choice, for consistent naming of the same functionality.

**Member Author:** Yeah, unfortunately resources is an overloaded word here. However, I agree that it makes sense to use it in this context, because it is the terminology Kubernetes uses to describe compute resources, and luckily PipelineResources will be going away, which should lessen the confusion.

**Member:** Hmm, that's unfortunate. Would the limits and requests fields be added under the current resources field?

**Member Author:** Ah, I see the problem. This is definitely annoying, but I'm hoping naming bikeshedding won't block this PR. Maybe we could call it "compute" or something.

**Contributor:** An update from the implementation: tentatively I've kept the added requests and limits under the mentioned resources field for the moment (https://github.com/tektoncd/pipeline/pull/4877/files?diff=unified&w=0#diff-f67e6d3007cea84a7a2e6301beb54c39b96b75ac9e57d888cbf5cc86c57510f3R77), for these reasons:

  • consistent naming for (container compute) resource requirements, as used in all the other resources fields, like Task.spec.step.resources and Task.spec.stepTemplate.resources
  • code-level comments and docs updates can clarify the minor (but meaningful) difference in context between resources for build steps and container compute resources

**Member Author:** It's definitely less than ideal, but I think it's OK given that we'll be removing PipelineResources (and better than the alternatives). Thanks for the update, Austin!

**Member:** Note: we discussed this briefly during last week's API working group. Even though the resources field (PipelineResource) is going away, re-using the same field name is going to create a lot of confusion. We may have to use a more explicit field here (resource_requirement or something).

**Member Author:** Hm, yeah, I think this needs more discussion. How would you feel about merging this PR to discuss behavior changes, and I'll open a subsequent one specifically for naming? I added a note above mentioning the name conflict with PipelineResources.

Review discussion on limit enforcement:

**Member:** Question: what happens when the container runtime enforces the limit? Do we currently handle that error somehow, or does the TaskRun just time out?

**Contributor:** From working on the implementation, perhaps this helps: task-level limits will be applied to each step, so each step/container is enforced with the required limits. Although the task/pod level will have a larger summed-up limit (as it comes from each step), the container runtime still enforces the limit over the whole process, because steps execute sequentially under Tekton. Some lines I updated in the docs may also help: https://github.com/tektoncd/pipeline/pull/4877/files?diff=unified&w=0#diff-5007e9db17cf9b70b930577bd627581e10e754347e92e749f5d58614456d7643R82

**Contributor (@austinzhao-go, May 24, 2022):** Task-level over-limits will be checked in a helper function in the reconcile logic, which throws an error to fail the TaskRun, similar to the existing step-level over-limits check, validateTaskSpecRequestResources(), and its implementation.

**Member Author:** Discussed in tektoncd/pipeline#4930.


Review discussion on Future Work:

**Member Author:** @austinzhao-go updated the TEP to clarify:

> We could also consider support for both Task-level and Step-level resource requirements if the requirements are for different types of compute resources (for example, specifying CPU request at the Step level and memory request at the Task level). However, this functionality will not be supported by the initial implementation of this proposal; it can be added later if desired.


[resources]: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
[scheduling]: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-requests-are-scheduled
[enforcement]: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#how-pods-with-resource-limits-are-run
[effective]: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#resources
[update]: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources
[init]: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
The PR also updates the TEP-0104 row in teps/README.md: the entry now points to 0104-tasklevel-resource-requirements.md with the title “Task-level Resource Requirements”, status implementable, last updated 2022-05-11.