
etcd state stalls control plane bootstrapping on 3rd node #2591

Closed
randomvariable opened this issue Mar 9, 2020 · 25 comments

Labels: area/control-plane (Issues or PRs related to control-plane lifecycle management), kind/bug (Categorizes issue or PR as related to a bug)

@randomvariable
Member

What steps did you take and what happened:
[A clear and concise description on how to REPRODUCE the bug.]
When specifying a 3-node control plane, on certain infrastructures the bootstrap of the 3rd machine fails.

First reported by @CecileRobertMichon:

[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
{"level":"warn","ts":"2020-03-06T20:41:26.445Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.0.0.5:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
error execution phase control-plane-join/update-status: error uploading configuration: unable to create ConfigMap: rpc error: code = Unavailable desc = etcdserver: leader changed

with KCP status:

  Replicas:                  3
  Version:                   1.16.7
Status:
  Initialized:           true
  Ready:                 true
  Ready Replicas:        1
  Replicas:              2
  Selector:              cluster.x-k8s.io/cluster-name=capz-0,cluster.x-k8s.io/control-plane=
  Unavailable Replicas:  1
  Updated Replicas:      2

What did you expect to happen:
3 node cluster bootstraps successfully

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Likely related to the misdiagnosed kubernetes-sigs/cluster-api-provider-vsphere#782

Environment:

  • Cluster-api version: 0.3.0-rc.2
  • Minikube/KIND version:
  • Kubernetes version: (use kubectl version): 1.17.3
  • OS (e.g. from /etc/os-release):

/kind bug

@k8s-ci-robot added the kind/bug label on Mar 9, 2020
@randomvariable
Member Author

Taking a look on the CAPV side.

@randomvariable changed the title from "etcd sync stalls control plane bootstrapping on secondary nodes" to "etcd sync stalls control plane bootstrapping on 3rd node" on Mar 9, 2020
@randomvariable changed the title from "etcd sync stalls control plane bootstrapping on 3rd node" to "etcd state stalls control plane bootstrapping on 3rd node" on Mar 9, 2020
@fabriziopandini
Member

Is it possible to get etcd logs (from all 3 members)?
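
One way to get a consolidated view of the etcd cluster from any single member, assuming kubeadm's default certificate layout and a crictl-managed runtime (the container name and cert paths below are those defaults, not values taken from this cluster):

# Run etcdctl inside the etcd container and ask every member for its status;
# a healthy 3-member cluster shows one leader and three reachable endpoints.
ETCD_CONTAINER=$(crictl ps --name etcd -q | head -n1)
crictl exec "$ETCD_CONTAINER" etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  --endpoints=https://127.0.0.1:2379 \
  endpoint status --cluster -w table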

@vincepri
Member

vincepri commented Mar 9, 2020

Is it possible to test with rc.3?

@CecileRobertMichon
Contributor

@vincepri I was testing with rc.3

@johnlam90

@vincepri seeing this on rc.3

@CecileRobertMichon
Contributor

CecileRobertMichon commented Mar 9, 2020

From the capi controller-manager:

kubectl describe deploy/capi-controller-manager -n capi-system | grep rc
    Image:      us.gcr.io/k8s-artifacts-prod/cluster-api/cluster-api-controller:v0.3.0-rc.3

@detiber
Member

detiber commented Mar 9, 2020

The sync delays and errors for reads taking too long are a bit worrisome. What type of disks are backing the control plane instances? What does the networking look like between the instances?
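
Two quick checks that could answer both questions, sketched with placeholder paths and addresses: a fio run approximating etcd's fsync-heavy write pattern, and a latency probe between the control plane machines.

# Rough etcd-style disk benchmark: small writes with an fdatasync after each,
# run against the filesystem backing the etcd data directory.
fio --name=etcd-fsync-check --directory=/var/lib/etcd --size=22m --bs=2300 \
    --rw=write --ioengine=sync --fdatasync=1

# Round-trip latency between control plane machines; etcd's heartbeat here is
# 100ms (see the logs further down), so latencies anywhere near that are a problem.
ping -c 20 <other-control-plane-ip>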

@neolit123
Member

RAID controllers?
kubernetes/kubernetes#88111 (comment)

@johnlam90

Tried about 5 times today; got it working once, where it deployed all 3 masters. Here are the logs for the failures.

First control plane mycluster-b555r.log

Second control plane mycluster-xts2n.log

Environment:
vsphere: 6.7U3
OS: Ubuntu 1.15.4
CAPI version: rc3

@vincepri
Member

vincepri commented Mar 9, 2020

@johnlam90 What kubernetes version are you using out of curiosity?

@johnlam90

@vincepri 1.16.7

@detiber
Member

detiber commented Mar 10, 2020

This error on the second control plane is a bit concerning:

error execution phase control-plane-join/update-status: error uploading configuration: unable to create RBAC role: Post https://10.46.85.254:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: unexpected EOF

It looks like either the load balancer or the api-server for the first control plane Machine had an issue where it severed the connection early during that call.

Could you provide additional logs from the host?

Of particular interest would be the kubelet, api-server, and etcd logs for both the first and second control plane Machines.
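
For reference, something along these lines would capture them on each Machine, assuming a systemd-managed kubelet and crictl (adjust for the runtime actually in use):

# Kubelet logs from systemd.
journalctl -u kubelet --no-pager > "kubelet-$(hostname).log"

# Static pod container logs for the api-server and etcd.
for name in kube-apiserver etcd; do
  cid=$(crictl ps -a --name "$name" -q | head -n1)
  [ -n "$cid" ] && crictl logs "$cid" > "${name}-$(hostname).log" 2>&1
done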

@randomvariable
Member Author

randomvariable commented Mar 10, 2020

Checking on the CAPV side, I'm seeing different errors more frequently, and I'm still suspecting the HAProxyLoadBalancer.

However, I can reproduce the original problem repeatedly in one situation:
where the CNI pod network and the host network overlap (at least with Calico 1.12), once the CNI pods are active, inter-host routing is dead.

The steps for 100% reproducibility are:

  1. Instantiate a 1-node KCP workload cluster within a 192.168.0.0/16 network
  2. Apply Calico with the default manifest to the workload cluster
  3. Scale up the number of replicas

You can also get it most of the time by applying Calico at any point: etcd leadership will move around until the etcd cluster is finally dead, which shows up in the API server and etcd logs. A quick check for the overlap is sketched after the test notes below.

Tests done on rc3, K8s 1.17.3
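
The overlap itself is easy to spot by comparing node addresses against the pod CIDR in use (sketch only; 192.168.0.0/16 is Calico's default IPv4 pool, and <cluster-name> is a placeholder):

# Host (node) addresses, as reported by the workload cluster:
kubectl get nodes -o wide

# Pod CIDR the cluster was created with, from the management cluster's Cluster object:
kubectl get cluster <cluster-name> -o jsonpath='{.spec.clusterNetwork.pods.cidrBlocks}'

# Pool pinned in the Calico manifest being applied (defaults to 192.168.0.0/16):
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml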

@randomvariable
Member Author

randomvariable commented Mar 10, 2020

error execution phase control-plane-join/update-status: error uploading configuration: unable to create RBAC role: Post https://10.46.85.254:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: unexpected EOF

is the error I see more often in CAPV, but I suspect the HAProxy configuration, as the version of the Dataplane API there does not support scraping /healthz whilst doing TCP pass-through.
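
For illustration only (this is not the config the CAPV HAProxy image actually ships, and the backend addresses are placeholders), the missing combination of TCP pass-through plus an HTTPS /healthz health check would look roughly like this in a hand-written haproxy.cfg:

cat > /etc/haproxy/haproxy.cfg <<'EOF'
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane

backend control-plane
    mode tcp
    # HTTP-layer health check against the apiserver, even though the traffic
    # itself is passed through at the TCP layer.
    option httpchk GET /healthz
    http-check expect status 200
    default-server check check-ssl verify none inter 3s fall 3 rise 2
    server cp1 192.0.2.10:6443
    server cp2 192.0.2.11:6443
EOF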

For completeness, the logs look something like this:

[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-03-10T06:52:50.259Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://192.168.192.158:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
error execution phase control-plane-join/update-status: error uploading configuration: Get https://192.168.192.138:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s: unexpected EOF
To see the stack trace of this error execute with --v=5 or higher

etcd logs on initial node:


2020-03-10 06:50:56.925875 I | etcdmain: etcd Version: 3.4.3
2020-03-10 06:50:56.925971 I | etcdmain: Git SHA: 3cf2f69b5
2020-03-10 06:50:56.925974 I | etcdmain: Go Version: go1.12.12
2020-03-10 06:50:56.925975 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-10 06:50:56.925977 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-10 06:50:56.926026 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-10 06:50:56.933448 I | embed: name = test-4cgd4.domain.local
2020-03-10 06:50:56.933457 I | embed: data dir = /var/lib/etcd
2020-03-10 06:50:56.933460 I | embed: member dir = /var/lib/etcd/member
2020-03-10 06:50:56.933462 I | embed: heartbeat = 100ms
2020-03-10 06:50:56.933463 I | embed: election = 1000ms
2020-03-10 06:50:56.933465 I | embed: snapshot count = 10000
2020-03-10 06:50:56.933472 I | embed: advertise client URLs = https://192.168.192.163:2379
2020-03-10 06:50:57.621015 I | etcdserver: starting member caadd71e072faa6b in cluster c02cadd2631ba338
raft2020/03/10 06:50:57 INFO: caadd71e072faa6b switched to configuration voters=()
raft2020/03/10 06:50:57 INFO: caadd71e072faa6b became follower at term 0
raft2020/03/10 06:50:57 INFO: newRaft caadd71e072faa6b [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/03/10 06:50:57 INFO: caadd71e072faa6b became follower at term 1
raft2020/03/10 06:50:57 INFO: caadd71e072faa6b switched to configuration voters=(14604565690601941611)
2020-03-10 06:50:57.700824 W | auth: simple token is not cryptographically signed
2020-03-10 06:50:57.703763 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-03-10 06:50:57.715074 I | etcdserver: caadd71e072faa6b as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/03/10 06:50:57 INFO: caadd71e072faa6b switched to configuration voters=(14604565690601941611)
2020-03-10 06:50:57.842956 I | etcdserver/membership: added member caadd71e072faa6b [https://192.168.192.163:2380] to cluster c02cadd2631ba338
2020-03-10 06:50:57.879109 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-10 06:50:57.879314 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-10 06:50:57.879386 I | embed: listening for peers on 192.168.192.163:2380
raft2020/03/10 06:50:58 INFO: caadd71e072faa6b is starting a new election at term 1
raft2020/03/10 06:50:58 INFO: caadd71e072faa6b became candidate at term 2
raft2020/03/10 06:50:58 INFO: caadd71e072faa6b received MsgVoteResp from caadd71e072faa6b at term 2
raft2020/03/10 06:50:58 INFO: caadd71e072faa6b became leader at term 2
raft2020/03/10 06:50:58 INFO: raft.node: caadd71e072faa6b elected leader caadd71e072faa6b at term 2
2020-03-10 06:50:58.158415 I | etcdserver: published {Name:test-4cgd4.domain.local ClientURLs:[https://192.168.192.163:2379]} to cluster c02cadd2631ba338
2020-03-10 06:50:58.182368 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-10 06:50:58.194346 I | embed: ready to serve client requests
2020-03-10 06:50:58.210541 I | embed: serving client requests on 127.0.0.1:2379
2020-03-10 06:50:58.210580 I | embed: ready to serve client requests
2020-03-10 06:50:58.211114 I | embed: serving client requests on 192.168.192.163:2379
2020-03-10 06:50:58.358716 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-10 06:50:58.358847 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-10 06:51:00.566653 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (126.369187ms) to execute
raft2020/03/10 06:52:40 INFO: caadd71e072faa6b switched to configuration voters=(9910999429085204018 14604565690601941611)
2020-03-10 06:52:40.679639 I | etcdserver/membership: added member 898af17853b90e32 [https://192.168.192.158:2380] to cluster c02cadd2631ba338
2020-03-10 06:52:40.679658 I | rafthttp: starting peer 898af17853b90e32...
2020-03-10 06:52:40.679670 I | rafthttp: started HTTP pipelining with peer 898af17853b90e32
2020-03-10 06:52:40.681956 I | rafthttp: started peer 898af17853b90e32
2020-03-10 06:52:40.682376 I | rafthttp: started streaming with peer 898af17853b90e32 (writer)
2020-03-10 06:52:40.682388 I | rafthttp: started streaming with peer 898af17853b90e32 (writer)
2020-03-10 06:52:40.682413 I | rafthttp: started streaming with peer 898af17853b90e32 (stream MsgApp v2 reader)
2020-03-10 06:52:40.682631 I | rafthttp: started streaming with peer 898af17853b90e32 (stream Message reader)
2020-03-10 06:52:40.682798 I | rafthttp: added peer 898af17853b90e32
raft2020/03/10 06:52:41 WARN: caadd71e072faa6b stepped down to follower since quorum is not active
raft2020/03/10 06:52:41 INFO: caadd71e072faa6b became follower at term 2
raft2020/03/10 06:52:41 INFO: raft.node: caadd71e072faa6b lost leader caadd71e072faa6b at term 2
raft2020/03/10 06:52:43 INFO: caadd71e072faa6b is starting a new election at term 2
raft2020/03/10 06:52:43 INFO: caadd71e072faa6b became candidate at term 3
raft2020/03/10 06:52:43 INFO: caadd71e072faa6b received MsgVoteResp from caadd71e072faa6b at term 3
raft2020/03/10 06:52:43 INFO: caadd71e072faa6b [logterm: 2, index: 747] sent MsgVote request to 898af17853b90e32 at term 3
2020-03-10 06:52:43.294205 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (2.000017966s) to execute
WARNING: 2020/03/10 06:52:43 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
raft2020/03/10 06:52:44 INFO: caadd71e072faa6b is starting a new election at term 3
raft2020/03/10 06:52:44 INFO: caadd71e072faa6b became candidate at term 4
raft2020/03/10 06:52:44 INFO: caadd71e072faa6b received MsgVoteResp from caadd71e072faa6b at term 4
raft2020/03/10 06:52:44 INFO: caadd71e072faa6b [logterm: 2, index: 747] sent MsgVote request to 898af17853b90e32 at term 4
2020-03-10 06:52:45.683018 W | rafthttp: health check for peer 898af17853b90e32 could not connect: dial tcp 192.168.192.158:2380: connect: connection refused
2020-03-10 06:52:45.683052 W | rafthttp: health check for peer 898af17853b90e32 could not connect: dial tcp 192.168.192.158:2380: connect: connection refused
raft2020/03/10 06:52:46 INFO: caadd71e072faa6b is starting a new election at term 4
raft2020/03/10 06:52:46 INFO: caadd71e072faa6b became candidate at term 5
raft2020/03/10 06:52:46 INFO: caadd71e072faa6b received MsgVoteResp from caadd71e072faa6b at term 5
raft2020/03/10 06:52:46 INFO: caadd71e072faa6b [logterm: 2, index: 747] sent MsgVote request to 898af17853b90e32 at term 5
raft2020/03/10 06:52:47 INFO: caadd71e072faa6b is starting a new election at term 5
raft2020/03/10 06:52:47 INFO: caadd71e072faa6b became candidate at term 6
raft2020/03/10 06:52:47 INFO: caadd71e072faa6b received MsgVoteResp from caadd71e072faa6b at term 6
raft2020/03/10 06:52:47 INFO: caadd71e072faa6b [logterm: 2, index: 747] sent MsgVote request to 898af17853b90e32 at term 6
2020-03-10 06:52:47.689536 W | etcdserver: timed out waiting for read index response (local node might have slow network)
2020-03-10 06:52:47.689582 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "error:etcdserver: request timed out" took too long (7.00010527s) to execute
2020-03-10 06:52:48.349866 I | rafthttp: peer 898af17853b90e32 became active
2020-03-10 06:52:48.349883 I | rafthttp: established a TCP streaming connection with peer 898af17853b90e32 (stream Message reader)
2020-03-10 06:52:48.349983 I | rafthttp: established a TCP streaming connection with peer 898af17853b90e32 (stream MsgApp v2 reader)
2020-03-10 06:52:48.354839 I | rafthttp: established a TCP streaming connection with peer 898af17853b90e32 (stream Message writer)
2020-03-10 06:52:48.360337 I | rafthttp: established a TCP streaming connection with peer 898af17853b90e32 (stream MsgApp v2 writer)
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b is starting a new election at term 6
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b became candidate at term 7
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b received MsgVoteResp from caadd71e072faa6b at term 7
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b [logterm: 2, index: 747] sent MsgVote request to 898af17853b90e32 at term 7
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b received MsgVoteResp from 898af17853b90e32 at term 7
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b has received 2 MsgVoteResp votes and 0 vote rejections
raft2020/03/10 06:52:48 INFO: caadd71e072faa6b became leader at term 7
raft2020/03/10 06:52:48 INFO: raft.node: caadd71e072faa6b elected leader caadd71e072faa6b at term 7
2020-03-10 06:52:48.989103 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "error:etcdserver: leader changed" took too long (2.514265138s) to execute
2020-03-10 06:52:48.989212 W | etcdserver: read-only range request "key:\"/registry/minions/test-nww2j.domain.local\" " with result "error:etcdserver: leader changed" took too long (6.840824012s) to execute
2020-03-10 06:52:48.989275 W | etcdserver: read-only range request "key:\"/registry/networkpolicies\" range_end:\"/registry/networkpoliciet\" count_only:true " with result "error:etcdserver: leader changed" took too long (6.868147989s) to execute
2020-03-10 06:52:48.989313 W | etcdserver: read-only range request "key:\"/registry/resourcequotas\" range_end:\"/registry/resourcequotat\" count_only:true " with result "error:etcdserver: leader changed" took too long (3.419603095s) to execute
2020-03-10 06:52:48.989357 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "error:etcdserver: leader changed" took too long (3.521135266s) to execute
2020-03-10 06:52:48.989414 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/cloud-controller-manager\" " with result "error:etcdserver: leader changed" took too long (8.11241886s) to execute
2020-03-10 06:52:48.989453 W | etcdserver: read-only range request "key:\"/registry/clusterroles\" range_end:\"/registry/clusterrolet\" count_only:true " with result "error:etcdserver: leader changed" took too long (4.849528182s) to execute
2020-03-10 06:52:48.989471 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:etcdserver: leader changed" took too long (5.316568165s) to execute
2020-03-10 06:52:48.989509 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "error:etcdserver: leader changed" took too long (5.347262093s) to execute
2020-03-10 06:52:48.989549 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/test-md-0-7bf7c858b4-tgrnj.domain.local\" " with result "error:etcdserver: leader changed" took too long (5.367377309s) to execute
2020-03-10 06:52:48.989573 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims\" range_end:\"/registry/persistentvolumeclaimt\" count_only:true " with result "error:etcdserver: leader changed" took too long (5.441164988s) to execute
2020-03-10 06:52:48.989587 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:etcdserver: leader changed" took too long (6.413985202s) to execute
2020-03-10 06:52:48.989608 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "error:etcdserver: leader changed" took too long (6.855475064s) to execute
2020-03-10 06:52:48.989662 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-test-nww2j.domain.local\" " with result "error:etcdserver: leader changed" took too long (8.298134763s) to execute
2020-03-10 06:52:48.989711 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "error:etcdserver: leader changed" took too long (7.036225425s) to execute
2020-03-10 06:52:48.989797 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/vsphere-csi-controller-0\" " with result "error:etcdserver: leader changed" took too long (8.298215679s) to execute
2020-03-10 06:52:48.989885 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/vsphere-csi-controller-0\" " with result "error:etcdserver: leader changed" took too long (8.298404651s) to execute
2020-03-10 06:52:48.989928 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy\" range_end:\"/registry/podsecuritypolicz\" count_only:true " with result "error:etcdserver: leader changed" took too long (4.478003141s) to execute
2020-03-10 06:52:48.990016 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "error:etcdserver: leader changed" took too long (2.824579593s) to execute
2020-03-10 06:52:48.990127 W | etcdserver: read-only range request "key:\"/registry/minions/test-nww2j.domain.local\" " with result "error:etcdserver: leader changed" took too long (8.283260657s) to execute
2020-03-10 06:52:48.990204 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "error:etcdserver: leader changed" took too long (7.103666864s) to execute
2020-03-10 06:52:48.990609 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "error:etcdserver: leader changed" took too long (7.391269809s) to execute
2020-03-10 06:52:48.990767 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/vsphere-csi-controller-0.15fadec5ee9df271\" " with result "error:etcdserver: leader changed" took too long (8.298998368s) to execute
2020-03-10 06:52:52.197235 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "error:context canceled" took too long (2.601028703s) to execute
WARNING: 2020/03/10 06:52:52 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-03-10 06:52:52.197309 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "error:context canceled" took too long (1.202667222s) to execute
WARNING: 2020/03/10 06:52:52 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-03-10 06:52:52.600013 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:context canceled" took too long (1.602640405s) to execute
WARNING: 2020/03/10 06:52:52 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-03-10 06:52:52.600102 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/vsphere-csi-controller-0\" " with result "error:context canceled" took too long (3.393979652s) to execute
WARNING: 2020/03/10 06:52:52 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-03-10 06:52:53.294309 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.00032947s) to execute
2020-03-10 06:52:55.989128 W | etcdserver: timed out waiting for read index response (local node might have slow network)
2020-03-10 06:52:55.989214 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "error:etcdserver: request timed out" took too long (8.013158313s) to execute
2020-03-10 06:52:55.989256 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "error:etcdserver: request timed out" took too long (7.467938232s) to execute
2020-03-10 06:52:55.989262 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/test-nww2j.domain.local\" " with result "error:etcdserver: request timed out" took too long (8.194834659s) to execute
2020-03-10 06:52:55.989273 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "error:etcdserver: request timed out" took too long (8.011395628s) to execute
2020-03-10 06:52:55.989285 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "error:etcdserver: request timed out" took too long (8.263718857s) to execute
2020-03-10 06:52:55.989309 W | etcdserver: read-only range request "key:\"/registry/secrets/kube-system/bootstrap-token-pe8j10\" " with result "error:etcdserver: request timed out" took too long (8.274704182s) to execute
2020-03-10 06:52:55.990210 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-r8hrk\" " with result "range_response_count:1 size:2170" took too long (6.984687565s) to execute
2020-03-10 06:52:55.990355 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:5" took too long (619.377829ms) to execute
2020-03-10 06:52:55.990544 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:7" took too long (1.396697703s) to execute
2020-03-10 06:52:55.990709 W | etcdserver: read-only range request "key:\"/registry/csinodes/test-md-0-7bf7c858b4-tgrnj.domain.local\" " with result "range_response_count:0 size:5" took too long (1.815462023s) to execute
2020-03-10 06:52:55.990893 W | etcdserver: read-only range request "key:\"/registry/roles\" range_end:\"/registry/rolet\" count_only:true " with result "range_response_count:0 size:7" took too long (2.193376442s) to execute
2020-03-10 06:52:55.991024 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations\" range_end:\"/registry/mutatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:5" took too long (2.479973479s) to execute
2020-03-10 06:52:55.991168 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" " with result "range_response_count:1 size:2559" took too long (2.51285342s) to execute
2020-03-10 06:52:55.991305 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:485" took too long (2.712016465s) to execute
2020-03-10 06:52:55.991434 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:485" took too long (3.234534203s) to execute
2020-03-10 06:52:55.991538 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" " with result "range_response_count:1 size:2559" took too long (3.439088526s) to execute
2020-03-10 06:52:55.991665 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/test-md-0-7bf7c858b4-tgrnj.domain.local\" " with result "range_response_count:0 size:5" took too long (6.787625316s) to execute
2020-03-10 06:52:55.991732 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:7" took too long (3.516677728s) to execute
2020-03-10 06:52:55.992082 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-test-4cgd4.domain.local\" " with result "range_response_count:1 size:2436" took too long (3.734483165s) to execute
2020-03-10 06:52:55.992228 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:172" took too long (6.883084794s) to execute
2020-03-10 06:52:55.992396 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-test-nww2j.domain.local\" " with result "range_response_count:1 size:2115" took too long (6.883358708s) to execute
2020-03-10 06:52:55.992478 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/vsphere-cloud-controller-manager-gcmzw.15fadebaa20a5d78\" " with result "range_response_count:1 size:588" took too long (4.169979679s) to execute
2020-03-10 06:52:55.992650 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:467" took too long (4.221376646s) to execute
2020-03-10 06:52:55.993003 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/vsphere-cloud-controller-manager-gcmzw\" " with result "range_response_count:1 size:2412" took too long (4.229107378s) to execute
2020-03-10 06:52:55.993151 W | etcdserver: read-only range request "key:\"/registry/namespaces\" range_end:\"/registry/namespacet\" count_only:true " with result "range_response_count:0 size:7" took too long (4.295641949s) to execute
2020-03-10 06:52:55.993507 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/kubeadm-config\" " with result "range_response_count:1 size:978" took too long (5.743122304s) to execute
2020-03-10 06:52:55.993910 W | etcdserver: read-only range request "key:\"/registry/minions/test-nww2j.domain.local\" " with result "range_response_count:1 size:2950" took too long (5.881292029s) to execute
2020-03-10 06:52:55.994105 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:7" took too long (6.152438061s) to execute
2020-03-10 06:52:55.995158 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-test-nww2j.domain.local.15fadec2a000a8ef\" " with result "range_response_count:0 size:5" took too long (6.518408262s) to execute
2020-03-10 06:52:55.995300 W | etcdserver: read-only range request "key:\"/registry/leases\" range_end:\"/registry/leaset\" count_only:true " with result "range_response_count:0 size:7" took too long (6.54746488s) to execute
2020-03-10 06:52:55.995524 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:3 size:9085" took too long (6.615095708s) to execute
2020-03-10 06:52:56.042164 I | embed: rejected connection from "127.0.0.1:38952" (error "EOF", ServerName "")
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/03/10 06:52:56 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-03-10 07:00:57.951454 I | mvcc: store.index: compact 1308
2020-03-10 07:00:57.973974 I | mvcc: finished scheduled compaction at 1308 (took 22.302067ms)
2020-03-10 07:05:57.954992 I | mvcc: store.index: compact 2224
2020-03-10 07:05:57.967820 I | mvcc: finished scheduled compaction at 2224 (took 12.510527ms)
2020-03-10 07:10:57.966323 I | mvcc: store.index: compact 3124
2020-03-10 07:10:57.978785 I | mvcc: finished scheduled compaction at 3124 (took 12.240293ms)

etcd logs on node 2:

2020-03-10 06:52:48.313477 I | etcdmain: etcd Version: 3.4.3
2020-03-10 06:52:48.313509 I | etcdmain: Git SHA: 3cf2f69b5
2020-03-10 06:52:48.313512 I | etcdmain: Go Version: go1.12.12
2020-03-10 06:52:48.313514 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-10 06:52:48.313517 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-10 06:52:48.313599 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-10 06:52:48.323242 I | embed: name = test-nww2j.domain.local
2020-03-10 06:52:48.323255 I | embed: data dir = /var/lib/etcd
2020-03-10 06:52:48.323259 I | embed: member dir = /var/lib/etcd/member
2020-03-10 06:52:48.323261 I | embed: heartbeat = 100ms
2020-03-10 06:52:48.323264 I | embed: election = 1000ms
2020-03-10 06:52:48.323266 I | embed: snapshot count = 10000
2020-03-10 06:52:48.323273 I | embed: advertise client URLs = https://192.168.192.158:2379
2020-03-10 06:52:48.400841 I | etcdserver: starting member 898af17853b90e32 in cluster c02cadd2631ba338
raft2020/03/10 06:52:48 INFO: 898af17853b90e32 switched to configuration voters=()
raft2020/03/10 06:52:48 INFO: 898af17853b90e32 became follower at term 0
raft2020/03/10 06:52:48 INFO: newRaft 898af17853b90e32 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-03-10 06:52:48.423443 W | auth: simple token is not cryptographically signed
2020-03-10 06:52:48.427679 I | rafthttp: started HTTP pipelining with peer caadd71e072faa6b
2020-03-10 06:52:48.427698 I | rafthttp: starting peer caadd71e072faa6b...
2020-03-10 06:52:48.427709 I | rafthttp: started HTTP pipelining with peer caadd71e072faa6b
2020-03-10 06:52:48.429188 I | rafthttp: started streaming with peer caadd71e072faa6b (writer)
2020-03-10 06:52:48.429660 I | rafthttp: started streaming with peer caadd71e072faa6b (writer)
2020-03-10 06:52:48.431855 I | rafthttp: started peer caadd71e072faa6b
2020-03-10 06:52:48.431877 I | rafthttp: added peer caadd71e072faa6b
2020-03-10 06:52:48.431886 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-03-10 06:52:48.431982 I | rafthttp: started streaming with peer caadd71e072faa6b (stream MsgApp v2 reader)
2020-03-10 06:52:48.432237 I | rafthttp: started streaming with peer caadd71e072faa6b (stream Message reader)
2020-03-10 06:52:48.438473 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-10 06:52:48.438598 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-10 06:52:48.439607 I | embed: listening for peers on 192.168.192.158:2380
2020-03-10 06:52:48.439785 I | rafthttp: peer caadd71e072faa6b became active
2020-03-10 06:52:48.439802 I | rafthttp: established a TCP streaming connection with peer caadd71e072faa6b (stream Message writer)
2020-03-10 06:52:48.440401 I | rafthttp: established a TCP streaming connection with peer caadd71e072faa6b (stream MsgApp v2 writer)
2020-03-10 06:52:48.444939 I | rafthttp: established a TCP streaming connection with peer caadd71e072faa6b (stream Message reader)
2020-03-10 06:52:48.450208 I | rafthttp: established a TCP streaming connection with peer caadd71e072faa6b (stream MsgApp v2 reader)
raft2020/03/10 06:52:49 INFO: 898af17853b90e32 [term: 0] received a MsgVote message with higher term from caadd71e072faa6b [term: 7]
raft2020/03/10 06:52:49 INFO: 898af17853b90e32 became follower at term 7
raft2020/03/10 06:52:49 INFO: 898af17853b90e32 [logterm: 0, index: 0, vote: 0] cast MsgVote for caadd71e072faa6b [logterm: 2, index: 747] at term 7
raft2020/03/10 06:52:49 INFO: raft.node: 898af17853b90e32 elected leader caadd71e072faa6b at term 7
raft2020/03/10 06:52:49 INFO: 898af17853b90e32 switched to configuration voters=(14604565690601941611)
2020-03-10 06:52:49.096715 I | etcdserver/membership: added member caadd71e072faa6b [https://192.168.192.163:2380] to cluster c02cadd2631ba338
2020-03-10 06:52:49.096844 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-10 06:52:49.096896 I | etcdserver/api: enabled capabilities for version 3.4
raft2020/03/10 06:52:49 INFO: 898af17853b90e32 switched to configuration voters=(9910999429085204018 14604565690601941611)
2020-03-10 06:52:49.193872 I | etcdserver/membership: added member 898af17853b90e32 [https://192.168.192.158:2380] to cluster c02cadd2631ba338
2020-03-10 06:52:49.194796 I | etcdserver: published {Name:test-nww2j.domain.local ClientURLs:[https://192.168.192.158:2379]} to cluster c02cadd2631ba338
2020-03-10 06:52:49.194860 I | embed: ready to serve client requests
2020-03-10 06:52:49.194913 I | embed: ready to serve client requests
2020-03-10 06:52:49.201141 I | embed: serving client requests on 192.168.192.158:2379
2020-03-10 06:52:49.201168 I | embed: serving client requests on 127.0.0.1:2379
2020-03-10 07:00:57.993030 I | mvcc: store.index: compact 1308
2020-03-10 07:00:58.015539 I | mvcc: finished scheduled compaction at 1308 (took 22.289788ms)
2020-03-10 07:05:57.977731 I | mvcc: store.index: compact 2224
2020-03-10 07:05:57.990493 I | mvcc: finished scheduled compaction at 2224 (took 12.471121ms)
2020-03-10 07:10:57.986801 I | mvcc: store.index: compact 3124
2020-03-10 07:10:57.999384 I | mvcc: finished scheduled compaction at 3124 (took 12.321011ms)

The read-only range request errors appear to be a red herring during sync; I/O stats seem OK:

crictl exec -it 3cc etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/ca.crt  --key=/etc/kubernetes/pki/etcd/ca.key --endpoints=localhost:2379 check perf
 60 / 60 Boooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo! 100.00% 1m0s
PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.068823s
PASS: Stddev is 0.003320s
PASS
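
etcd's own latency histograms on the local metrics listener (shown in the logs above on 127.0.0.1:2381) are another quick way to separate slow disk from slow network:

# WAL fsync and backend commit latency counters; persistently high values here
# usually point at slow storage rather than a network problem.
curl -s http://127.0.0.1:2381/metrics | \
  grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'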

Tests done on rc3, K8s 1.17.3

@fabriziopandini
Member

The CAPZ issue seems to be fixed now (root cause: a misconfiguration in the template).
@CecileRobertMichon please confirm

@neolit123
Member

@randomvariable

Where the CNI pod network and host network overlap, at least for Calico 1.12, once the CNI pod is active, inter-host routing is dead.

Can the issue be resolved if the Pod CIDR is modified? This obviously requires changing CALICO_IPV4POOL_CIDR and the kubeadm ClusterConfiguration.
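
For reference, these are the two places that would need to agree (example CIDR only, assuming the Cluster object's pod CIDR is left to flow into kubeadm's ClusterConfiguration as podSubnet):

# 1. Pod CIDR on the Cluster object, set before creating the cluster, e.g.
#
#      spec:
#        clusterNetwork:
#          pods:
#            cidrBlocks: ["172.16.0.0/16"]
#
# 2. Matching pool in the Calico manifest before applying it:
sed -i 's#192\.168\.0\.0/16#172.16.0.0/16#' calico.yaml
kubectl apply -f calico.yaml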

@randomvariable
Member Author

@neolit123 yeah, we had hard-coded the value in the CAPV template to reduce the number of variables, since clusterctl doesn't support default values yet, but we will move to make it mandatory, because a large number of CAPV users will have clusters in 192.168.0.0/16.

As for the other issue, that's already tracked in kubernetes-sigs/cluster-api-provider-vsphere#782. cc @johnlam90

@CecileRobertMichon
Contributor

I actually just got this error again with capz (with fixed JoinConfiguration this time)

[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
{"level":"warn","ts":"2020-03-10T17:06:06.640Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.0.0.5:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
error execution phase control-plane-join/update-status: error uploading configuration: unable to create ConfigMap: rpc error: code = Unavailable desc = etcdserver: leader changed
To see the stack trace of this error execute with --v=5 or higher
Cloud-init v. 19.4-33-gbb4131a2-0ubuntu1~18.04.1 running 'modules:final' at Tue, 10 Mar 2020 17:05:16 +0000. Up 34.85 seconds.
2020-03-10 17:06:17,866 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/runcmd [1]
2020-03-10 17:06:17,891 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
2020-03-10 17:06:17,901 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py'>) failed
Cloud-init v. 19.4-33-gbb4131a2-0ubuntu1~18.04.1 finished at Tue, 10 Mar 2020 17:06:18 +0000. Datasource DataSourceAzure [seed=/dev/sr0].  Up 96.93 seconds

@jayunit100
Contributor

jayunit100 commented Mar 10, 2020

I really feel like there's got to be some kind of test we can run on CAPD to induce this: chaos, overlapping IPs, missing kube-proxy, randomly killing etcd nodes. I know @randomvariable said that, in general, nobody has seen it there, but I'd venture to say that if something doesn't happen on CAPD (granted, it might require the correct stressor or scenario to simulate cloud latency), then we can't really call it a CAPI bug, and it has to be triaged by the provider.

@jayunit100
Contributor

I'll file a separate issue for this idea and see what others think.

@CecileRobertMichon
Contributor

This seems to have been fixed by v0.3.0 for me... still unsure what the exact root cause is 😕

@randomvariable
Member Author

We've fixed our HAProxy issue as well in kubernetes-sigs/cluster-api-provider-vsphere#823, so I would be happy to close.

@randomvariable
Member Author

Closing this issue.
File separate issues for each individual case.

/close

@k8s-ci-robot
Contributor

@randomvariable: Closing this issue.

In response to this:

Closing this issue.
File separate issues for each individual case.

/close


@johnlam90

Closing this issue.
File separate issues for each individual case.

/close

Thanks! With CAPV v6.0 and v0.3.0, this resolved my issue. Tried it about 5 times with no issues.

@vincepri added the area/control-plane label on Mar 23, 2020