Version:
k3s version v1.18.2+k3s1 (698e444)
K3s arguments:
First master:
curl -sfL https://get.k3s.io | \
INSTALL_K3S_EXEC="server --cluster-init" sh -
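For reference, the K3S_TOKEN used in the join commands below is the node token from the initial master; on a default install it can be read from the standard k3s location:

# run on the initial master; prints the cluster join token (K3S_TOKEN)
sudo cat /var/lib/rancher/k3s/server/node-token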
Remaining master nodes (2 more, for 3 masters in total):
curl -sfL https://get.k3s.io | \
K3S_URL=https://<ip address of initial master>:6443 \
K3S_TOKEN=<token from initial master> \
INSTALL_K3S_EXEC="server --server https://<ip address of initial master>:6443" \
sh -
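A quick sanity check I'd expect to pass after each join, assuming the default kubeconfig that k3s writes to /etc/rancher/k3s/k3s.yaml:

# run on the newly joined master; every server joined so far should be listed
sudo k3s kubectl get nodes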
Finally, one worker:
curl -sfL https://get.k3s.io | \
K3S_URL=https://<ip address of initial master>:6443 \
K3S_TOKEN=<token from initial master> \
sh -
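At this point the full cluster should be visible from any master, with the worker carrying no master role:

# run on any master: expect 4 nodes, 3 of them with the master role
sudo k3s kubectl get nodes -o wide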
Describe the bug
After everything is installed, running kubectl get nodes returns the expected 4 nodes, three of them with the master role. However, if I shut down the initial node, kubectl fails from then on with: Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
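To narrow this down, a couple of checks worth running on a surviving master while the first one is off (a diagnostic sketch; I haven't captured this output here):

# is the local apiserver still listening on the API port?
sudo ss -tlnp | grep 6443
# retry against the local server with a short client-side timeout
sudo k3s kubectl get nodes --request-timeout=10s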
To Reproduce
Redeploy using the same commands (I'm using Ansible playbooks, so I know the commands are exactly the same).
Expected behavior
Another master node takes over, and when kubectl get nodes is run on it, it shows the 4 nodes with the original master as NotReady. With three masters, the embedded datastore should still have quorum (2 of 3) after losing one node.
Actual behavior
kubectl times out on the remaining master nodes for as long as the first master node is down.
Additional context / logs
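If useful, service logs from a surviving master covering the window where the first master is shut down can be collected like this (systemd install assumed):

# run on a surviving master: dump recent k3s service logs for attachment
sudo journalctl -u k3s --since "30 min ago" > k3s-surviving-master.log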