EKSA bare metal cluster scale-in doesn't honor new hardware.csv file #8190
Comments
This is not the right upgrade command:

    eksctl anywhere upgrade cluster -f cluster.yaml \
        # --hardware-csv <hardware.csv> \ # uncomment to add more hardware
        --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig

You should pass in the cluster spec to the
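For context, a sketch of what the same invocation could look like with the hardware CSV flag actually passed; the flags are standard `eksctl anywhere` flags, but the file names are only the ones quoted above and may differ in your environment:

```sh
# Hedged sketch only: file names are placeholders taken from this thread.
eksctl anywhere upgrade cluster \
  -f cluster.yaml \
  --hardware-csv hardware.csv \
  --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
```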
@jiayiwang7 Sorry, my bad, I already updated the description. Here was my original command:
Hi @ygao-armada
This issue has been resolved in our latest patch release v0.19.7.
I am still seeing this issue in our bare-metal setup. EKS-A version:
Before starting the scale-in, I have 2 worker nodes.
Then I edited my hardware CSV file to remove the instance-530 worker node.
I also adjusted my cluster config file to scale the worker node count to 1 (roughly as in the sketch below) and then ran the upgrade command.
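A minimal sketch of the kind of spec edit described above, assuming a Tinkerbell (bare metal) cluster spec; the cluster, node group, and machine config names are placeholders rather than the reporter's real values:

```yaml
# Illustrative Cluster spec fragment only; names are placeholders,
# count is the field being reduced for the scale-in.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  workerNodeGroupConfigurations:
    - name: md-0
      count: 1                     # was 2 before the scale-in
      machineGroupRef:
        kind: TinkerbellMachineConfig
        name: mgmt-worker
```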
However, EKS Anywhere still does not delete the node that I removed from the hardware CSV; instead, it starts deleting the other node.
Per my understanding of the fix, instance-530 should have been deleted, since it was removed from the hardware CSV. However, after the scale-in upgrade, the other node is deleted instead.
Can someone help?
What happened:
In an EKSA bare metal cluster, I try to scale the cluster down by 1 worker node by removing a specific worker node from the hardware.csv file, then run the following command:
eksctl anywhere upgrade cluster -f eksa-new.yaml --hardware-csv hardware-new.csv
However, it turns out the node that gets removed may not be the desired one.
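For illustration, hardware-new.csv here would simply be the original hardware file with the row for the node to be scaled in deleted. A minimal sketch, assuming the usual EKS Anywhere bare metal column layout; every value below is a placeholder:

```csv
hostname,bmc_ip,bmc_username,bmc_password,mac,ip_address,netmask,gateway,nameservers,labels,disk
worker-1,10.0.0.11,admin,secret,00:00:00:00:00:01,10.0.1.11,255.255.255.0,10.0.1.1,8.8.8.8,type=worker,/dev/sda
```

The row for the second worker has been deleted, so that machine is the one expected to be removed by the scale-in.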
What you expected to happen:
The desired worker node is removed.
How to reproduce it (as minimally and precisely as possible):
create an EKSA bare metal cluster with 2 worker nodes
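The remaining reproduction steps appear to be cut off; based on the description above, they are presumably along these lines (a sketch, not the reporter's exact commands; eksa.yaml is a placeholder for the original spec, and the other file names are the ones mentioned in this issue):

```sh
# 1. Create the cluster with 2 worker nodes registered in the hardware CSV.
eksctl anywhere create cluster -f eksa.yaml --hardware-csv hardware.csv

# 2. Delete the row of the worker you want removed from the hardware CSV and
#    lower the worker node count in the cluster spec
#    (eksa-new.yaml / hardware-new.csv in the report above).

# 3. Run the scale-in upgrade and check which node actually gets deleted.
eksctl anywhere upgrade cluster -f eksa-new.yaml --hardware-csv hardware-new.csv
```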
Anything else we need to know?:
Environment: