bug 1801692: Sync with upstream #29
Conversation
fix broken link
…rategy Compared to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity, the strategy essentially implements the requiredDuringSchedulingRequiredDuringExecution node affinity type for kubelets. The only addition over the kubelet is that the strategy checks whether there is at least one other node capable of respecting the node affinity rules. Once the requiredDuringSchedulingRequiredDuringExecution node affinity type is implemented in the kubelet, the strategy will likely be either removed or re-implemented. Stressing the relation with requiredDuringSchedulingRequiredDuringExecution helps consumers of the descheduler keep in mind that the kubelet will eventually take over the strategy once it is implemented.
…iolating-node-affinity readme: RemovePodsViolatingNodeAffinity: reword description of the strategy
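To make the described behavior concrete, here is a simplified Go sketch (not the descheduler's actual code) of matching a pod's requiredDuringSchedulingIgnoredDuringExecution node affinity against a node. Only the In operator is handled, and the helper names are illustrative:

```go
// Simplified sketch: does a node satisfy a pod's required node affinity?
package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// podFitsNodeAffinity returns true when at least one node selector term is
// fully satisfied by the node's labels (terms are ORed, expressions within
// a term are ANDed).
func podFitsNodeAffinity(pod *v1.Pod, node *v1.Node) bool {
	affinity := pod.Spec.Affinity
	if affinity == nil || affinity.NodeAffinity == nil ||
		affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution == nil {
		return true // no required affinity means any node fits
	}
	for _, term := range affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms {
		if termMatches(term, node.Labels) {
			return true
		}
	}
	return false
}

func termMatches(term v1.NodeSelectorTerm, labels map[string]string) bool {
	for _, expr := range term.MatchExpressions {
		if expr.Operator != v1.NodeSelectorOpIn {
			return false // unsupported operators fail closed in this sketch
		}
		matched := false
		for _, val := range expr.Values {
			if labels[expr.Key] == val {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}
```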
Only overutilized nodes need classification of pods into categories. Thus, skipping pod categorization for the remaining nodes saves computation time whenever overutilized nodes make up only a fraction (e.g. less than 50%) of all nodes; see the sketch below.
Unit test refactored node utilization and pod classification
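A hypothetical sketch of that optimization (NodeUsage, isOverutilized, and classifyPods are illustrative names, not the descheduler's real API):

```go
// Sketch: classify pods only for nodes detected as overutilized, so
// underutilized and properly utilized nodes skip the categorization pass.
package sketch

import v1 "k8s.io/api/core/v1"

type NodeUsage struct {
	Node *v1.Node
	Pods []*v1.Pod
}

// classifyOverutilizedOnly returns, per overutilized node, the pods grouped
// into categories; all other nodes are skipped entirely.
func classifyOverutilizedOnly(
	nodes []NodeUsage,
	isOverutilized func(NodeUsage) bool,
	classifyPods func([]*v1.Pod) map[string][]*v1.Pod,
) map[string]map[string][]*v1.Pod {
	result := map[string]map[string][]*v1.Pod{}
	for _, usage := range nodes {
		if !isOverutilized(usage) {
			continue // skip categorization for non-overutilized nodes
		}
		result[usage.Node.Name] = classifyPods(usage.Pods)
	}
	return result
}
```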
…re/v1.Toleration.TolerateTaint A functionally identical implementation of toleratesTaint is already provided in k8s.io/api
… already implemented in utils.TolerationsTolerateTaint
…erationsTolerateTaintsWithFilter
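The helpers named above can be built directly on the core/v1.Toleration.ToleratesTaint method that k8s.io/api already provides. A plausible reconstruction (details may differ from the descheduler's actual utils):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// TolerationsTolerateTaint checks whether any of the tolerations tolerates
// the given taint, delegating to the upstream ToleratesTaint method.
func TolerationsTolerateTaint(tolerations []v1.Toleration, taint *v1.Taint) bool {
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(taint) {
			return true
		}
	}
	return false
}

// TolerationsTolerateTaintsWithFilter checks whether all taints selected by
// the filter (a nil filter selects every taint) are tolerated.
func TolerationsTolerateTaintsWithFilter(tolerations []v1.Toleration, taints []v1.Taint, filter func(*v1.Taint) bool) bool {
	for i := range taints {
		if filter != nil && !filter(&taints[i]) {
			continue
		}
		if !TolerationsTolerateTaint(tolerations, &taints[i]) {
			return false
		}
	}
	return true
}
```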
…-priority-only-over-over-utilized-nodes Order pods by priority only over overutilized nodes
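A minimal sketch of such an ordering, lowest priority first so low-priority pods are considered for eviction before high-priority ones; treating a nil priority as zero is an assumption of this example:

```go
package sketch

import (
	"sort"

	v1 "k8s.io/api/core/v1"
)

// sortPodsByPriority orders pods ascending by priority in place.
func sortPodsByPriority(pods []*v1.Pod) {
	sort.Slice(pods, func(i, j int) bool {
		return priorityOf(pods[i]) < priorityOf(pods[j])
	})
}

// priorityOf treats an unset priority as zero (assumption of this sketch).
func priorityOf(pod *v1.Pod) int32 {
	if pod.Spec.Priority == nil {
		return 0
	}
	return *pod.Spec.Priority
}
```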
…checkPodsSatisfyTolerations
…ains-code-cleanup Tolerations taints code cleanup
Each strategy implements a check for whether the maximum number of evicted pods per node has already been reached. That check duplicates code that can be put under a single invocation: an EvictPod call can encapsulate both the check and the pod eviction itself, reducing the number of arguments passed to each strategy.
Move maximum-pods-per-nodes-evicted logic under a single invocation
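An illustrative sketch of that encapsulation, with a hypothetical PodEvictor type (field and method names are not the descheduler's exact API):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// PodEvictor tracks per-node eviction counts so strategies only call
// EvictPod and never see the limit bookkeeping.
type PodEvictor struct {
	maxPodsPerNode int
	evictedPerNode map[string]int
	evict          func(pod *v1.Pod) error // actual eviction call, injected
}

func NewPodEvictor(maxPodsPerNode int, evict func(*v1.Pod) error) *PodEvictor {
	return &PodEvictor{
		maxPodsPerNode: maxPodsPerNode,
		evictedPerNode: map[string]int{},
		evict:          evict,
	}
}

// EvictPod encapsulates both the per-node limit check and the eviction
// itself; it reports whether the pod was evicted.
func (pe *PodEvictor) EvictPod(pod *v1.Pod, nodeName string) (bool, error) {
	if pe.maxPodsPerNode > 0 && pe.evictedPerNode[nodeName] >= pe.maxPodsPerNode {
		return false, nil // node limit reached; skip without error
	}
	if err := pe.evict(pod); err != nil {
		return false, err
	}
	pe.evictedPerNode[nodeName]++
	return true, nil
}
```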
Add RemoveTooManyRestarts policy
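A hedged sketch of the core check such a policy needs: summing container restart counts and comparing against a threshold. The function name and the includeInit switch are illustrative, not the policy's actual parameters:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// exceedsRestartThreshold reports whether a pod's total container restarts
// (optionally including init containers) exceed the given threshold.
func exceedsRestartThreshold(pod *v1.Pod, threshold int32, includeInit bool) bool {
	var restarts int32
	for _, cs := range pod.Status.ContainerStatuses {
		restarts += cs.RestartCount
	}
	if includeInit {
		for _, cs := range pod.Status.InitContainerStatuses {
			restarts += cs.RestartCount
		}
	}
	return restarts > threshold
}
```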
…a pointer The field is intended to be omitempty when not set. Without a pointer, the serialized strategies look like:

```yaml
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        numberOfNodes: 1
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 20
        thresholds:
          cpu: 50
          memory: 50
          pods: 20
  RemoveDuplicates:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingNodeTaints:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
```

It's preferred to have the following instead:

```yaml
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        numberOfNodes: 1
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 20
        thresholds:
          cpu: 50
          memory: 50
          pods: 20
  RemoveDuplicates:
    enabled: true
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
  RemovePodsViolatingNodeTaints:
    enabled: true
```
…-params-NodeResourceUtilizationThresholds-field-into-pointer Turn StrategyParameters.NodeResourceUtilizationThresholds field into a pointer
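In Go terms, the change amounts to something like the following sketch (surrounding fields elided): a non-pointer struct is never considered empty by encoding/json, so omitempty only takes effect once the field is a pointer.

```go
package sketch

type NodeResourceUtilizationThresholds struct {
	// ... threshold fields elided ...
}

type StrategyParameters struct {
	// Before: a value field, for which the omitempty tag has no effect,
	// so an empty {} is serialized for every strategy.
	// After: a pointer field, dropped from the output when nil.
	NodeResourceUtilizationThresholds *NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
}
```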
@ingvagabund: This pull request references Bugzilla bug 1801692, which is valid. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
…lerServer-from-strategies Drop descheduler server from strategies
76b9874 to f2c3954
/lgtm
Thanks @ingvagabund!
@ingvagabund: An error was encountered searching for bug 1801692 on the Bugzilla server at https://bugzilla.redhat.com:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Picking kubernetes-sigs#240