"Pod fits on node" that has lower utilization than current node #1461
Labels: kind/bug, lifecycle/stale
What version of descheduler are you using?
descheduler version: 0.30.1
Does this issue reproduce with the latest release?
Yes
Which descheduler CLI options are you using?
--v=7
--dry-run
Please provide a copy of your descheduler policy config file
What k8s version are you using (kubectl version)?

What did you do?
Configured the cluster's scheduler to use the MostAllocated scoring strategy, and deployed the descheduler with the above HighNodeUtilization policy.

What did you expect to see?
With the current policies, I expect pods on nodes with less than 70% memory/CPU usage to be descheduled if there is room on another node with higher usage.
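For reference, a HighNodeUtilization descheduler policy expressing that 70% threshold might look like the following. This is a sketch, not the reporter's actual policy file (which is not included in the issue); the thresholds and profile name are assumptions:

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "HighNodeUtilization"
        args:
          # Nodes below these percentages are considered underutilized,
          # and their pods become eviction candidates.
          thresholds:
            cpu: 70
            memory: 70
    plugins:
      balance:
        enabled:
          - "HighNodeUtilization"
```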
What did you see instead?
Some pods are sometimes descheduled even when the only other node they can fit on has lower resource usage. This results in an endless loop: the pod is descheduled, then rescheduled onto the same node, since that node's usage is higher.
Here are the truncated logs:
We can see that the pod is currently scheduled on node ip-x-x-x-47.us-east-1.compute.internal with a utilization of {"cpu":64.29,"memory":52.14,"pods":20.69}, and that the descheduler considers it can fit on ip-x-x-x-63.us-east-1.compute.internal with a (lower) utilization of {"cpu":5.61,"memory":2.37,"pods":10.34}.
The pod is then descheduled, and because the scheduler is configured with the MostAllocated option, it gets scheduled on ip-x-x-x-47.us-east-1.compute.internal again.
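The scheduler side of the interaction described above can be sketched as a KubeSchedulerConfiguration using the MostAllocated scoring strategy. This is an illustrative fragment, assuming the default scheduler profile; the resource weights are assumptions, not taken from the reporter's cluster:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            # MostAllocated scores nodes with higher utilization more
            # favorably -- the opposite preference to the descheduler's
            # HighNodeUtilization "fits on node" check, which is what
            # allows the evict/reschedule loop described in this issue.
            type: MostAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
```

With these two configurations active at once, the descheduler evicts a pod toward the emptier node while the scheduler scores the fuller node higher, reproducing the loop shown in the logs.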