bug 1801692: Sync with upstream #29

Merged
merged 24 commits into openshift:master on Apr 28, 2020

Conversation

ingvagabund
Member

CriaHu and others added 22 commits April 6, 2020 09:28
…rategy

Compared to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity,
the strategy effectively implements the requiredDuringSchedulingRequiredDuringExecution node affinity type
on behalf of kubelets. The only addition over the kubelet behaviour is that the strategy checks whether
at least one other node is capable of respecting the node affinity rules.

Once the requiredDuringSchedulingRequiredDuringExecution node affinity type is implemented in the kubelet,
the strategy will likely be removed or re-implemented. Stressing the relation with
requiredDuringSchedulingRequiredDuringExecution helps consumers of the descheduler
keep in mind that the kubelet will eventually take over the strategy once it is implemented.
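
As a rough illustration of that extra check, here is a minimal Go sketch; the function names and the simplified matching (only the In operator of matchExpressions is handled) are assumptions for illustration, not the descheduler's actual helpers:

```go
package strategies

import v1 "k8s.io/api/core/v1"

// requiredAffinityMatches is a simplified, illustrative check: it only handles
// matchExpressions with the In operator from the pod's
// requiredDuringSchedulingIgnoredDuringExecution node affinity.
func requiredAffinityMatches(pod *v1.Pod, node *v1.Node) bool {
	aff := pod.Spec.Affinity
	if aff == nil || aff.NodeAffinity == nil || aff.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution == nil {
		return true // no required affinity: any node matches
	}
	// NodeSelectorTerms are ORed; a single matching term is enough.
	for _, term := range aff.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms {
		matches := true
		for _, expr := range term.MatchExpressions {
			if expr.Operator != v1.NodeSelectorOpIn {
				matches = false
				break
			}
			ok := false
			for _, val := range expr.Values {
				if node.Labels[expr.Key] == val {
					ok = true
					break
				}
			}
			if !ok {
				matches = false
				break
			}
		}
		if matches {
			return true
		}
	}
	return false
}

// shouldEvictForNodeAffinity mirrors the strategy's addition over the kubelet:
// the pod violates its node affinity on the current node AND at least one
// other node could respect it.
func shouldEvictForNodeAffinity(pod *v1.Pod, current *v1.Node, others []*v1.Node) bool {
	if requiredAffinityMatches(pod, current) {
		return false
	}
	for _, n := range others {
		if n.Name != current.Name && requiredAffinityMatches(pod, n) {
			return true
		}
	}
	return false
}
```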
…iolating-node-affinity

readme: RemovePodsViolatingNodeAffinity: reword description of the strategy
Only over-utilized nodes need their pods classified into categories.
Skipping classification for the remaining nodes saves computation time
whenever over-utilized nodes make up less than half (or whatever fraction)
of all nodes.
Unit test: refactor node utilization and pod classification
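
A hedged sketch of that short-circuit, with made-up types and names (nodeUsage, classifyPods, classifyOverUtilized are illustrative, not the descheduler's actual identifiers):

```go
package strategies

import v1 "k8s.io/api/core/v1"

// nodeUsage pairs a node with its pods and the result of the threshold check
// (illustrative type, not the descheduler's actual struct).
type nodeUsage struct {
	node         *v1.Node
	pods         []*v1.Pod
	overUtilized bool
}

// classifyPods would split pods into eviction categories (best-effort,
// burstable, guaranteed, ...); its body is omitted in this sketch.
func classifyPods(pods []*v1.Pod) map[string][]*v1.Pod {
	return map[string][]*v1.Pod{}
}

// classifyOverUtilized classifies pods only for nodes above the thresholds,
// skipping the (usually larger) remainder of the cluster.
func classifyOverUtilized(nodes []nodeUsage) map[string]map[string][]*v1.Pod {
	result := map[string]map[string][]*v1.Pod{}
	for _, nu := range nodes {
		if !nu.overUtilized {
			continue // classification is only needed for eviction candidates
		}
		result[nu.node.Name] = classifyPods(nu.pods)
	}
	return result
}
```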
…re/v1.Toleration.TolerateTaint

A functionally identical implementation of toleratesTaint is already provided by k8s.io/api.
… already implemented in utils.TolerationsTolerateTaint
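
For reference, the replacement lives directly on the core/v1 Toleration type; a minimal usage sketch (the taint and toleration values here are made up, and the wrapper name is illustrative):

```go
package strategies

import v1 "k8s.io/api/core/v1"

// tolerationsTolerateTaint reports whether any toleration in the list tolerates
// the taint, delegating to the helper shipped with k8s.io/api.
//
// Example:
//   taint := &v1.Taint{Key: "dedicated", Value: "infra", Effect: v1.TaintEffectNoSchedule}
//   tol   := v1.Toleration{Key: "dedicated", Operator: v1.TolerationOpEqual, Value: "infra", Effect: v1.TaintEffectNoSchedule}
//   tolerationsTolerateTaint([]v1.Toleration{tol}, taint) // true
func tolerationsTolerateTaint(tolerations []v1.Toleration, taint *v1.Taint) bool {
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(taint) {
			return true
		}
	}
	return false
}
```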
…-priority-only-over-over-utilized-nodes

Order pods by priority only on over-utilized nodes
…ains-code-cleanup

Tolerations taints code cleanup
Each strategy implements a check for whether the maximum number of pods per node
has already been evicted. The check duplicates code that can be put
behind a single invocation. This also reduces the number of arguments passed
to each strategy, since the EvictPod call can encapsulate both the check
and the pod eviction itself.
Move maximum-pods-per-nodes-evicted logic under a single invocation
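
A sketch of the shape this takes, assuming illustrative names (podEvictor, evictPod, maxPodsPerNode are not necessarily the real identifiers): the per-node limit check and the counter live inside the one call every strategy already makes.

```go
package evictions

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podEvictor bundles the client, the per-node eviction limit and the running
// per-node counters, so strategies no longer need to carry them as arguments.
type podEvictor struct {
	client         kubernetes.Interface
	maxPodsPerNode int
	evictedPerNode map[string]int
	dryRun         bool
}

// evictPod performs both the limit check and the eviction in a single place.
// The actual API call is elided; it would go through the pod eviction subresource.
func (pe *podEvictor) evictPod(pod *v1.Pod, node *v1.Node) bool {
	if pe.evictedPerNode[node.Name] >= pe.maxPodsPerNode {
		return false // this node already reached its eviction budget
	}
	if !pe.dryRun {
		// create an eviction for pod.Namespace/pod.Name via pe.client
	}
	pe.evictedPerNode[node.Name]++
	return true
}
```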
…a pointer

The field is intended to be omitted (omitempty) when not set. Without a pointer, the
serialized strategy configuration looks like:

```yaml
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        numberOfNodes: 1
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 20
        thresholds:
          cpu: 50
          memory: 50
          pods: 20
  RemoveDuplicates:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
      nodeResourceUtilizationThresholds: {}
  RemovePodsViolatingNodeTaints:
    enabled: true
    params:
      nodeResourceUtilizationThresholds: {}
```

It is preferred to have the following instead:
```yaml
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        numberOfNodes: 1
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 20
        thresholds:
          cpu: 50
          memory: 50
          pods: 20
  RemoveDuplicates:
    enabled: true
  RemovePodsViolatingInterPodAntiAffinity:
    enabled: true
  RemovePodsViolatingNodeAffinity:
    enabled: true
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
  RemovePodsViolatingNodeTaints:
    enabled: true
```
…-params-NodeResourceUtilizationThresholds-field-into-pointer

Turn StrategyParameters.NodeResourceUtilizationThresholds field into a pointer
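
The reason, shown as a small self-contained Go sketch (the field names are simplified, not the exact API types): omitempty never drops a struct-typed field, only a nil pointer, so without a pointer every strategy ends up with an empty nodeResourceUtilizationThresholds stanza.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type NodeResourceUtilizationThresholds struct {
	NumberOfNodes int `json:"numberOfNodes,omitempty"`
}

// Value field: omitempty has no effect on struct values, "{}" is always emitted.
type paramsWithValue struct {
	Thresholds NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
}

// Pointer field: a nil pointer is genuinely omitted.
type paramsWithPointer struct {
	Thresholds *NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
}

func main() {
	a, _ := json.Marshal(paramsWithValue{})
	b, _ := json.Marshal(paramsWithPointer{})
	fmt.Println(string(a)) // {"nodeResourceUtilizationThresholds":{}}
	fmt.Println(string(b)) // {}
}
```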
@openshift-ci-robot added the bugzilla/severity-medium (Referenced Bugzilla bug's severity is medium for the branch this PR is targeting) and bugzilla/valid-bug (Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting) labels on Apr 28, 2020
@openshift-ci-robot

@ingvagabund: This pull request references Bugzilla bug 1801692, which is valid. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.5.0) matches configured target release for branch (4.5.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

bug 1801692: Sync with upstream

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot and others added 2 commits April 28, 2020 08:34
…lerServer-from-strategies

Drop descheduler server from strategies

@damemi left a comment


/lgtm
Thanks @ingvagabund!

@openshift-ci-robot added the lgtm (Indicates that a PR is ready to be merged) label on Apr 28, 2020
@openshift-merge-robot merged commit 1b488f0 into openshift:master on Apr 28, 2020
@openshift-ci-robot

@ingvagabund: An error was encountered searching for bug 1801692 on the Bugzilla server at https://bugzilla.redhat.com:

Get https://bugzilla.redhat.com/rest/bug/1801692?api_key=CENSORED: dial tcp: i/o timeout
Please contact an administrator to resolve this issue, then request a bug refresh with /bugzilla refresh.

In response to this:

bug 1801692: Sync with upstream

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
