- Release Signoff Checklist
- Summary
- Motivation
- Proposal
- Design Details
- Production Readiness Review Questionnaire
- Implementation History
- Drawbacks
- Infrastructure Needed (Optional)
Items marked with (R) are required prior to targeting to a milestone / release.
- (R) Enhancement issue in release milestone, which links to KEP dir in kubernetes/enhancements (not the initial KEP PR)
- (R) KEP approvers have approved the KEP status as `implementable`
- (R) Design details are appropriately documented
- (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
- e2e Tests for all Beta API Operations (endpoints)
- (R) Ensure GA e2e tests meet requirements for Conformance Tests
- (R) Minimum Two Week Window for GA e2e tests to prove flake free
- (R) Graduation criteria is in place
- (R) all GA Endpoints must be hit by Conformance Tests
- (R) Production readiness review completed
- (R) Production readiness review approved
- "Implementation History" section is up-to-date for milestone
- User-facing documentation has been created in kubernetes/website, for publication to kubernetes.io
- Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
This KEP proposes adding support in the kubelet to read Pressure Stall Information (PSI) metrics pertaining to CPU, memory, and IO resources, as exposed by cAdvisor and runc. This will enable the kubelet to report node conditions, which will be used to prevent scheduling of pods on nodes experiencing significant resource constraints.
The PSI metrics provide a quantifiable way to see resource pressure increases as they develop, with a pressure metric for each of three major resources (memory, CPU, IO). These pressure metrics are useful for detecting resource shortages and give nodes the opportunity to respond intelligently, by updating the node condition.
In short, PSI metrics are like barometers that provide fair warning of impending resource shortages on the node, enabling nodes to take more proactive, granular, and nuanced steps when major resources (memory, CPU, IO) start becoming scarce.
This proposal aims to:
- Enable the kubelet to read the cgroup v2 PSI metrics exposed by cAdvisor and runc.
- Enable pod-level PSI metrics and expose them in the Summary API.
- Use node-level PSI metrics to set node conditions and node taints.

The work is split into two phases:
- Phase 1: goals 1 and 2
- Phase 2: goal 3
- Invest in more opportunities to further use PSI metrics for pod evictions, userspace OOM kills, and so on, in future KEPs.
Today, to identify disruptions caused by resource crunches, Kubernetes users need to install node exporter to read PSI metrics. With the feature proposed in this enhancement, PSI metrics will be available to users in the Kubernetes metrics API.
Kubernetes users want to prevent new pods from being scheduled on nodes experiencing resource starvation. Using PSI metrics, the kubelet will set node conditions to avoid pods being scheduled on nodes under high resource pressure. The node controller could then set a taint on the node based on these new node conditions.
There are no significant risks associated with Phase 1, which integrates the PSI metrics into the kubelet from either the cAdvisor runc libcontainer library or the kubelet's CRI runc libcontainer implementation, neither of which involves shelling out to any binaries.
Phase 2 involves utilizing the PSI metric to report node conditions. There is a potential risk of early reporting for nodes under pressure. We intend to address this concern by conducting careful experimentation with PSI threshold values to identify the optimal default threshold to be used for reporting the nodes under heavy resource pressure.
- Add new data structures PSIData and PSIStats corresponding to the PSI metric output format, which looks as follows:

  ```
  some avg10=0.00 avg60=0.00 avg300=0.00 total=0
  full avg10=0.00 avg60=0.00 avg300=0.00 total=0
  ```

  ```go
  type PSIData struct {
  	Avg10  *float64 `json:"avg10"`
  	Avg60  *float64 `json:"avg60"`
  	Avg300 *float64 `json:"avg300"`
  	Total  *float64 `json:"total"`
  }

  type PSIStats struct {
  	Some *PSIData `json:"some,omitempty"`
  	Full *PSIData `json:"full,omitempty"`
  }
  ```
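For illustration, here is a minimal Go sketch of parsing one line of the pressure-file format above into a simplified (non-pointer) PSIData. The helper name `parsePSILine` is hypothetical and not part of cAdvisor, runc, or the kubelet; real implementations live in the runc/cAdvisor libraries.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// PSIData is a simplified, non-pointer version of the structure proposed above.
type PSIData struct {
	Avg10  float64
	Avg60  float64
	Avg300 float64
	Total  float64
}

// parsePSILine (hypothetical helper) parses one line of a cgroup v2 pressure
// file, e.g. "some avg10=0.00 avg60=0.00 avg300=0.00 total=0", returning the
// line kind ("some" or "full") and the parsed values.
func parsePSILine(line string) (string, PSIData, error) {
	fields := strings.Fields(line)
	if len(fields) != 5 {
		return "", PSIData{}, fmt.Errorf("unexpected PSI line: %q", line)
	}
	kind := fields[0]
	var d PSIData
	for _, kv := range fields[1:] {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			return "", PSIData{}, fmt.Errorf("malformed field: %q", kv)
		}
		v, err := strconv.ParseFloat(parts[1], 64)
		if err != nil {
			return "", PSIData{}, err
		}
		switch parts[0] {
		case "avg10":
			d.Avg10 = v
		case "avg60":
			d.Avg60 = v
		case "avg300":
			d.Avg300 = v
		case "total":
			d.Total = v
		}
	}
	return kind, d, nil
}

func main() {
	kind, d, err := parsePSILine("some avg10=1.50 avg60=0.75 avg300=0.10 total=12345")
	if err != nil {
		panic(err)
	}
	fmt.Println(kind, d.Avg10, d.Avg60, d.Avg300, d.Total)
}
```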
- Summary API includes stats for both system and kubepods level cgroups. Extend the Summary API to include PSI metric data for each resource obtained from cadvisor. Note: if cadvisor-less is implemented prior to the implementation of this enhancement, the PSI metric data will be available through CRI instead.
```go
type CPUStats struct {
	// PSI stats of the overall node
	PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
}

type MemoryStats struct {
	// PSI stats of the overall node
	PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
}

// IOStats contains data about IO usage.
type IOStats struct {
	// The time at which these stats were updated.
	Time metav1.Time `json:"time"`

	// PSI stats of the overall node
	PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
}

type NodeStats struct {
	// Stats about the IO pressure of the node
	IO *IOStats `json:"io,omitempty"`
}
```
Note: These actions are tentative and will depend on the outcomes of testing and discussions with sig-node members, users, and other folks.
- Introduce a new kubelet config parameter, pressure threshold, to let users specify the pressure percentage beyond which the kubelet would report the node condition to disallow workloads from being scheduled on it.
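As a sketch only, such a parameter might surface in the KubeletConfiguration as below. The field name `pressureThreshold` and the example value are hypothetical; neither is finalized in this proposal.

```yaml
# Hypothetical KubeletConfiguration fragment; the field name and value are
# illustrative only and not finalized in this proposal.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
pressureThreshold: "60" # percent of time stalled beyond which the node condition is set
```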
- Add new node conditions corresponding to high PSI (beyond threshold levels) on CPU, memory, and IO:
```go
// These are valid conditions of the node. Currently, we don't have enough information to decide
// node condition.
const (
	…
	// Conditions based on pressure at the system-level cgroup.
	NodeSystemCPUContentionPressure    NodeConditionType = "SystemCPUContentionPressure"
	NodeSystemMemoryContentionPressure NodeConditionType = "SystemMemoryContentionPressure"
	NodeSystemDiskContentionPressure   NodeConditionType = "SystemDiskContentionPressure"

	// Conditions based on pressure at the kubepods-level cgroup.
	NodeKubepodsCPUContentionPressure    NodeConditionType = "KubepodsCPUContentionPressure"
	NodeKubepodsMemoryContentionPressure NodeConditionType = "KubepodsMemoryContentionPressure"
	NodeKubepodsDiskContentionPressure   NodeConditionType = "KubepodsDiskContentionPressure"
)
```
- The kernel collects PSI data over 10s, 60s, and 300s windows. Determining the optimal observation window will require tests and performance benchmarks. In theory, the 10s window might be too rapid a basis for tainting a node with the NoSchedule effect. Therefore, as an initial approach, the 60s window appears more appropriate for the observation logic.
Add observation logic that sets the node condition and taint per the following scenarios:
- If avg60 >= threshold, record an event indicating high resource pressure.
- If avg60 >= threshold and trending higher, i.e. avg10 >= threshold, set the node condition for high resource contention pressure. This should ensure no new pods are scheduled on nodes under heavy resource contention pressure.
- If avg60 >= threshold on a node tainted with the NoSchedule effect, but trending lower, i.e. avg10 <= threshold, record an event noting that the resource contention pressure is trending lower.
- If avg60 < threshold on a node tainted with the NoSchedule effect, remove the node condition.
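The scenarios above can be sketched as a pure decision function. The names below (`PressureAction`, `evaluatePressure`, the action constants) are illustrative assumptions, not part of the actual kubelet implementation.

```go
package main

import "fmt"

// PressureAction (hypothetical) is what the kubelet would do for one
// resource, following the scenarios in the proposal.
type PressureAction int

const (
	ActionNone PressureAction = iota
	ActionRecordHighPressureEvent     // avg60 >= threshold
	ActionSetNodeCondition            // avg60 >= threshold and trending higher
	ActionRecordPressureLoweringEvent // tainted node, trending lower
	ActionClearNodeCondition          // tainted node, avg60 back below threshold
)

// evaluatePressure applies the observation rules from the proposal to the
// avg10 and avg60 PSI values for a single resource.
func evaluatePressure(avg10, avg60, threshold float64, tainted bool) PressureAction {
	switch {
	case avg60 >= threshold && avg10 >= threshold:
		// High and trending higher: set the node condition so no new
		// pods are scheduled on this node.
		return ActionSetNodeCondition
	case avg60 >= threshold && tainted && avg10 <= threshold:
		// Still above threshold but trending lower: record an event.
		return ActionRecordPressureLoweringEvent
	case avg60 >= threshold:
		// Above threshold: record a high-pressure event.
		return ActionRecordHighPressureEvent
	case tainted:
		// avg60 dropped below threshold on a tainted node: clear it.
		return ActionClearNodeCondition
	}
	return ActionNone
}

func main() {
	fmt.Println(evaluatePressure(90, 80, 75, false)) // high and rising
	fmt.Println(evaluatePressure(10, 50, 75, true))  // recovered on a tainted node
}
```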
- Collaborate with sig-scheduling to modify TaintNodesByCondition feature to integrate new taints for the new Node Conditions introduced in this enhancement.
```
node.kubernetes.io/memory-contention-pressure=:NoSchedule
node.kubernetes.io/cpu-contention-pressure=:NoSchedule
node.kubernetes.io/disk-contention-pressure=:NoSchedule
```
- Perform experiments to finalize the default optimal pressure threshold value.
- Add a new feature gate, PSINodeCondition, and guard the node-condition logic behind it. Set `--feature-gates=PSINodeCondition=true` to enable the feature.
[X] I/we understand the owners of the involved components may require updates to existing tests to make this code solid enough prior to committing the changes necessary to implement this enhancement.
- `k8s.io/kubernetes/pkg/kubelet/server/stats`: `2023-10-04` - `74.4%`
Any identified external user of either of these endpoints (prometheus, metrics-server) should be tested to make sure they're not broken by new fields in the API response.
- PSI integrated in kubelet behind a feature flag.
- Unit tests to check the fields are populated in the Summary API response.
- Implement Phase 2 of the enhancement which enables kubelet to report node conditions based off PSI values.
- Initial e2e tests completed and enabled if CRI implementation supports it.
- Add documentation for the feature.
- Feature gate is enabled by default.
- Extend e2e test coverage.
- Allowing time for feedback.
- TBD
No impact. runc will be upgraded to version 1.2.0 as a prerequisite for this feature, and all the other components will already be at the expected levels, so there shouldn't be a problem in upgrading or downgrading. Besides, it's always possible to upgrade/downgrade to a different kubelet version.
N/A
PSI stats will be available only after CRI and cadvisor have been updated to use runc 1.2.0 in K8s 1.29. Since PSI-based node conditions depend on the kubelet version, and CRI and the kubelet are generally updated in tandem, a version skew strategy is not applicable.
- Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: PSINodeCondition
  - Components depending on the feature gate: kubelet
Not in Phase 1. Phase 2 is TBD in K8s 1.31.
Yes
When the feature is disabled, the node conditions will still exist on the nodes; however, there won't be any consumers of these node conditions. When the feature is re-enabled, the kubelet will overwrite out-of-date node conditions as expected.
Unit tests
Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
For Phase 1:
Use `kubectl get --raw "/api/v1/nodes/{$nodeName}/proxy/stats/summary"` to call the Summary API. If the `PSIStats` field is seen in the API response, the feature is available to be used by workloads.
For Phase 2: TBD
- Events
- Event Reason:
- API .status
- Condition name:
- Other field:
- Other (treat as last resort)
- Details:
What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
- Metrics
- Metric name:
- [Optional] Aggregation method:
- Components exposing the metric:
- Other (treat as last resort)
- Details:
Are there any missing metrics that would be useful to have to improve observability of this feature?
Yes, it depends on runc version 1.2.0. This KEP can be implemented only after runc 1.2.0 is released, which is estimated to be released in Q1 2024.
No
Yes, PSIStats is the new API type that will be added to Summary API.
No
No
Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
No
Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
No. The additional PSI metrics are simply read from cadvisor, which already collects them.
Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
No
NA
- NA.
NA
- 2023/09/13: Initial proposal
No drawbacks identified in Phase 1. There's no reason the enhancement should not be implemented. This enhancement makes it possible to read PSI metrics without installing additional dependencies.
No new infrastructure is needed.