This repository has been archived by the owner on Mar 28, 2020. It is now read-only.
- example-0000 starts fine.
- example-0000 is healthy, so start example-0001.
- example-0000 panics at 17:51:19.119670.
- example-0001 starts at 17:51:19.878104 while example-0000 is down.

Shouldn't the operator double-check example-0000's health status before adding example-0001? If this is already handled by the operator, please feel free to close.

Please ignore the etcd panic message for now (it will be fixed when we upgrade etcd's gRPC dependency to >v1.7).
kubectl logs -f example-0000
2017-11-01 17:51:19.118506 I | rafthttp: started peer 97c53354349fd849
2017-11-01 17:51:19.118550 I | rafthttp: added peer 97c53354349fd849
2017-11-01 17:51:19.118901 I | rafthttp: started streaming with peer 97c53354349fd849 (stream MsgApp v2 reader)
2017-11-01 17:51:19.119670 I | rafthttp: started streaming with peer 97c53354349fd849 (stream Message reader)
panic: send on closed channel
goroutine 165 [running]:
github.com/coreos/etcd/cmd/vendor/google.golang.org/grpc/transport.(*serverHandlerTransport).do(0xc42014b1f0, 0xc42028a660, 0x0, 0x0)
/etcd/gopath/src/github.com/coreos/etcd/cmd/vendor/google.golang.org/grpc/transport/handler_server.go:170 +0x115
github.com/coreos/etcd/cmd/vendor/google.golang.org/grpc/transport.(*serverHandlerTransport).WriteStatus(0xc42014b1f0, 0xc4201cf700, 0xc42000c070, 0xc420180d50, 0xfd4120)
kubectl logs -f example-0001
2017-11-01 17:51:19.878104 I | etcdmain: etcd Version: 3.2.0+git
2017-11-01 17:51:19.878159 I | etcdmain: Git SHA: 029f858
2017-11-01 17:51:19.878173 I | etcdmain: Go Version: go1.9.2
2017-11-01 17:51:19.878176 I | etcdmain: Go OS/Arch: linux/amd64
2017-11-01 17:51:19.878180 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2017-11-01 17:51:19.878217 I | embed: peerTLS: cert = /etc/etcdtls/member/peer-tls/peer.crt, key = /etc/etcdtls/member/peer-tls/peer.key, ca = , trusted-ca = /etc/etcdtls/member/peer-tls/peer-ca.crt, client-cert-auth = true, crl-file =
2017-11-01 17:51:19.879046 I | embed: listening for peers on https://0.0.0.0:2380
2017-11-01 17:51:19.879084 I | embed: listening for client requests on 0.0.0.0:2379
2017-11-01 17:51:19.888968 I | etcdserver: #0: starting get to "https://example-0000.example.default.svc:2380/members"
2017-11-01 17:51:21.889370 W | etcdserver: could not get cluster response from "https://example-0000.example.default.svc:2380": Get https://example-0000.example.default.svc:2380/members: dial tcp 10.60.9.102:2380: i/o timeout
I will keep you updated.
I am investigating gRPC upgrade issues in etcd anyway.
hongchaodeng changed the title from "etcd: seed member panic, second member is still being added" to "etcd 3.2: seed member panic, second member is still being added" on Nov 7, 2017.
/cc @fanminshi