Better support when Kafka is running in Kubernetes.
When running Kafka in Kubernetes, one can point the client's bootstrap_uri at a Kafka headless service and take advantage of Kubernetes DNS to get back a list of canonical Kafka broker pod names (FQDNs), which the client can then use to establish a proper broker bootstrap connection.
For that to work when the Kafka brokers are set up with SSL or SASL_SSL, Kafka clients must set client.dns.lookup=resolve_canonical_bootstrap_servers_only. This is because proper SSL support requires clients to connect to the broker using an FQDN, and the certificate returned by the broker needs a subjectAltName field that matches that FQDN.
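As a rough sketch of what resolve_canonical_bootstrap_servers_only does (an illustration using the Python standard library, not this client's implementation), the expansion of a bootstrap address into canonical broker names can be emulated like this:

```python
import socket

def resolve_canonical_bootstrap_servers(bootstrap_servers):
    """Emulate Kafka's client.dns.lookup=resolve_canonical_bootstrap_servers_only:
    expand each bootstrap address (e.g. a Kubernetes headless service) into the
    canonical FQDNs of the brokers behind it."""
    canonical = []
    for server in bootstrap_servers:
        host, _, port = server.rpartition(":")
        # Forward-resolve the service name to every broker IP it fronts.
        infos = socket.getaddrinfo(host, int(port), proto=socket.IPPROTO_TCP)
        for info in infos:
            ip = info[4][0]
            try:
                # Reverse-resolve each IP to its canonical hostname, so the
                # TLS handshake can match the broker cert's subjectAltName.
                fqdn = socket.gethostbyaddr(ip)[0]
            except socket.herror:
                fqdn = ip  # no PTR record: fall back to the address itself
            canonical.append(f"{fqdn}:{port}")
    return sorted(set(canonical))
```

Against a headless service, the forward lookup returns one A record per broker pod, and the reverse lookup turns each of those into the pod's FQDN.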
What is currently missing?
The suggestion is to add a setting that functions like the Java client's client.dns.lookup. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-602%3A+Change+default+value+for+client.dns.lookup
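For reference, this is how the Java client is configured to bootstrap against a Kubernetes headless service over SSL (the service name and port here are placeholders):

```properties
bootstrap.servers=kafka-headless.kafka.svc.cluster.local:9093
security.protocol=SSL
# Expand the headless service into the brokers' canonical pod FQDNs
# so the TLS handshake can match each broker cert's subjectAltName.
client.dns.lookup=resolve_canonical_bootstrap_servers_only
```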