etcdctl: fix member add (again...) #11638
Conversation
Use members information from member add response, which is guaranteed to be up to date.
LGTM
…8-upstream-release-3.3 Automated cherry pick of #11638 on release-3.3
…8-upstream-release-3.4 Automated cherry pick of #11638 on release-3.4
{"level":"warn","ts":"2020-07-15T05:45:40.089+0100","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://ipmasked:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}

This happens when we add stacked Kubernetes masters following the instructions at kubernetes.io. It is the second master node. `etcdctl` endpoint status and member list report correctly, with node 1 as leader and node 2 as false; however, when we take down master 1, the entire cluster goes down. Kubernetes version 1.18.5, etcd version 3.4.3. (endpoint status table truncated)
@eselvam I am not sure I follow your comment. Are you trying to add a new node to a 2-node cluster with 1 of the nodes down?
I just realized we could simply use the members information in the member add response, which is guaranteed to be up to date.

We tried to fix `etcdctl member add` in #11194, but that does not solve the problem when the client balancer is given two endpoints and does round-robin: the follow-up member list call may be routed to a server that has not yet applied the membership change.

Fixes #11554
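The idea can be sketched as follows: instead of issuing a second member-list RPC after `MemberAdd` (which a round-robin balancer may send to a stale server), build the `ETCD_INITIAL_CLUSTER` string directly from the member list carried in the add response. This is a minimal, self-contained sketch; the `Member` struct and `initialCluster` helper are illustrative stand-ins, not the real etcd proto types or etcdctl code.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Member loosely mirrors the fields of etcd's Member message that matter
// here; the field names are illustrative, not the real generated types.
type Member struct {
	Name     string
	PeerURLs []string
}

// initialCluster builds an ETCD_INITIAL_CLUSTER value from the member
// list returned inside the MemberAdd response. The just-added member has
// no name yet (it has not started), so newName is substituted for it.
// Because this list comes from the add response itself, it is guaranteed
// to be up to date, with no second round-trip to a possibly stale server.
func initialCluster(members []Member, newName string) string {
	parts := make([]string, 0, len(members))
	for _, m := range members {
		name := m.Name
		if name == "" { // the freshly added, unstarted member
			name = newName
		}
		for _, u := range m.PeerURLs {
			parts = append(parts, fmt.Sprintf("%s=%s", name, u))
		}
	}
	sort.Strings(parts)
	return strings.Join(parts, ",")
}

func main() {
	// Hypothetical response contents after adding a third member.
	members := []Member{
		{Name: "infra1", PeerURLs: []string{"http://10.0.0.1:2380"}},
		{Name: "infra2", PeerURLs: []string{"http://10.0.0.2:2380"}},
		{Name: "", PeerURLs: []string{"http://10.0.0.3:2380"}}, // just added
	}
	fmt.Println(initialCluster(members, "infra3"))
}
```

The key point is that no `MemberList` call appears anywhere: every piece of cluster state used here arrived in the add response, so round-robin endpoint selection cannot hand back an outdated view.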