Extract entrypoint command from bin scripts, try native-image zookeeper #311
Conversation
Experimental, based on the assumption that things like broker.rack and listener addresses can be set using --override flags with env expansion. If we had a statefulset pod-ordinal label we could set those things too, but for now we need to rely on the pod name (plus, for zookeeper, ID_OFFSET).
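A sketch of that workaround (not the actual init script; pod name and offset value are assumed): derive the zookeeper id from the trailing ordinal in the pod name.

```shell
# Hypothetical init step: derive the zookeeper myid from the pod name,
# since the statefulset exposes no pod-ordinal label.
POD_NAME="zoo-2"          # in a real pod this comes from hostname / downward API
ID_OFFSET=1               # assumed offset so ids start at 1
ORDINAL="${POD_NAME##*-}" # strip everything up to the last dash -> "2"
MYID=$((ORDINAL + ID_OFFSET))
echo "$MYID"
```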
force-pushed from b3f5896 to 8c31ca4
force-pushed from 8c31ca4 to 76eee47
through the service. This helps troubleshoot issues like #310 by pointing out (by podIP) which actual zookeeper connection failed. Also I like the simplification.
the pod name or statefulset.kubernetes.io/pod-name can be used via the downward api in args to do things like --override listeners=PLAINTEXT://$(POD_NAME).kafka:9092 Once again it's unfortunate that the statefulset label is pod name, not pod index. Also makes sure that DNS entries are published prior to readiness so clusters don't get into loops of not being able to find each other.
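As a sketch of the effect (simulated here in shell; the kafka service name is assumed): Kubernetes expands $(POD_NAME) in container args when POD_NAME is injected from metadata.name via the downward api, so the arg the broker sees would be:

```shell
# Simulate the $(POD_NAME) expansion Kubernetes applies to container args.
POD_NAME="kafka-0"   # assumed pod name; real value comes from the downward API
ARG="--override listeners=PLAINTEXT://${POD_NAME}.kafka:9092"
echo "$ARG"
```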
…rs-support Rename the statefulset's headless service from broker to kafka
client sessionid 0x0, likely client has closed socket"
this value should be increased or set to 0 (infinite retry) to overcome issues related to DNS name resolution. But I'm not sure whether "Java system property only" means that this config entry has no effect.
I had a very strange init error that isn't reproducible: the symlink operation failed, complaining that the target path exists.
After deleting the pod, the next pod came up as usual.
because in a resource strapped dev environment kafka will often crashloop several times while waiting for zookeeper, and JVM starts are heavy.
while GKE had no such issues, probably because of fsGroup
force-pushed from 14aedc4 to 53326d9
force-pushed from 53326d9 to 4ab73a0
interesting now that hooks are removed
force-pushed from 4173241 to 372497f
brokers failing to become ready while zoo pods (two out of three) logged WARN Message:Error accepting new connection: Too many connections from [IP]
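That WARN corresponds to ZooKeeper's maxClientCnxns limit, the per-host connection cap (default 60). A hedged zoo.cfg fragment, assuming raising or disabling the cap is acceptable in this setup:

```
# zoo.cfg: allow more concurrent connections from a single IP (0 = unlimited)
maxClientCnxns=0
```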
we don't need to sed the pod's own entry to 0.0.0.0
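For context, a hypothetical reconstruction of the sed step this makes unnecessary (file layout, service name and id are assumed): the pod rewrites its own server entry in zoo.cfg to 0.0.0.0 so it binds on all interfaces.

```shell
# Hypothetical init step being removed: the pod with myid=2 rewrites
# its own server line in zoo.cfg to bind on all interfaces.
MYID=2
cat > zoo.cfg <<'EOF'
server.1=zoo-0.zoo:2888:3888
server.2=zoo-1.zoo:2888:3888
server.3=zoo-2.zoo:2888:3888
EOF
sed -i "s/server\.$MYID=[^:]*/server.$MYID=0.0.0.0/" zoo.cfg
grep "^server.$MYID" zoo.cfg
```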
To summarize changes from v6.0.4: The ambition was to keep the base folders kafka and zookeeper unchanged, but we had to make some changes to init scripts and container args in order to support an updated nonroot base and the new native base.
so that it matches the zookeeper.connect property, which now points at actual pods
Use the command generated by Kafka's ./bin/*.sh as entrypoints. Given the same env these scripts simply produce the same command every time, and append args. See #309 and https://github.com/solsson/dockerfiles/blob/master/hooks/build.
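The extraction can be sketched like this (the script contents below are a hypothetical stand-in; the real build hook is in the linked repo): since the bin script ends in an exec of a fully assembled command, turning the exec into an echo makes the script print that command instead of running it.

```shell
# Hypothetical stand-in for a Kafka bin script that execs a stable command.
cat > kafka-server-start.sh <<'EOF'
#!/bin/sh
exec java -Xmx512m kafka.Kafka "$@"
EOF
# Replace exec with echo so the script prints its final command.
sed 's/^exec /echo /' kafka-server-start.sh > print-cmd.sh
CMD=$(sh print-cmd.sh config/server.properties)
echo "$CMD"
```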
Now things like #306 should be done with --override.