Linkerd CNI repair controller does not listen on IPv6 #12864
alpeb added a commit that referenced this issue on Jul 22, 2024:
Fixes #12864

The linkerd-cni network-validator container was binding to the IPv4 wildcard and connecting to an IPv4 address. This wasn't breaking things in IPv6 clusters, but it meant only the iptables rules were being validated, not the ip6tables ones. This change introduces logic to pick addresses according to the value of `disableIPv6`: if IPv6 is enabled, the ip6tables rules get exercised. Note that a more complete change would exercise both iptables and ip6tables, but for now we're defaulting to ip6tables. This implied changing the Helm value `networkValidator.connectAddr` to `connectPort`. @mateiidavid could you please validate whether this entry, with its simplified doc, still makes sense in light of #12797?

The case was similar with the repair-controller, but since the IPv4 wildcard was used for its admin server, in IPv6 clusters the kubelet wasn't able to reach the probe endpoints and the container kept failing. Here the fix is simply to have the admin server bind to `[::]`, which works for both IPv4 and IPv6 clusters.
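For illustration, here is a rough Go sketch of the address-selection approach the commit describes: picking IPv4 or IPv6 addresses based on a `disableIPv6` flag, so that the connect test exercises the ip6tables rules rather than the iptables ones when IPv6 is enabled. The function name, port, and loopback targets are assumptions for the sketch, not Linkerd's actual code.

```go
package main

import (
	"fmt"
	"net"
)

// validatorAddrs returns a wildcard address to listen on and a loopback
// address to dial, chosen by IP family. Port 4140 and the loopback
// targets are hypothetical; the real validator's addresses come from
// its Helm-configured values.
func validatorAddrs(disableIPv6 bool, connectPort string) (listenAddr, connectAddr string) {
	if disableIPv6 {
		// IPv4 only: the dial path goes through the iptables rules.
		return "0.0.0.0:4140", net.JoinHostPort("127.0.0.1", connectPort)
	}
	// IPv6 enabled: the dial path goes through the ip6tables rules.
	return "[::]:4140", net.JoinHostPort("::1", connectPort)
}

func main() {
	listen, connect := validatorAddrs(false, "4140")
	fmt.Println("listen on", listen, "and dial", connect)
}
```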
Thanks for bringing this up; I was able to replicate the issue. I've raised #12874; please keep an eye out for when that gets included in an edge release, and let us know how it goes! :-)
alpeb added a commit that referenced this issue on Jul 23, 2024 (same commit message as above).
What is the issue?
The repair controller fails to start on an EKS IPv6 cluster; the admin server only listens on 0.0.0.0:9990.
How can it be reproduced?
Start an IPv6-only cluster and install linkerd-cni with the repair controller enabled.
Logs, error output, etc
None

Output of `linkerd check -o short`
Environment
EKS 1.18
Possible solution
Change the admin address to listen on IPv6 as well: `[::]:9990`
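A minimal sketch of the proposed fix, assuming a plain Go HTTP admin server (the real controller's implementation may differ): binding to the IPv6 wildcard `[::]` makes the probe endpoints reachable over IPv6, and on Linux hosts with the default `net.ipv6.bindv6only=0` the same listener also accepts IPv4 connections via IPv4-mapped addresses, so one bind covers both cluster families. The `/ready` path is a placeholder, not the controller's actual probe endpoint.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Placeholder readiness endpoint for the kubelet to probe.
	mux.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// "[::]:9990" instead of "0.0.0.0:9990": reachable over IPv6, so the
	// kubelet can hit the probes in an IPv6-only cluster; on dual-stack
	// hosts it still accepts IPv4 connections as well.
	log.Fatal(http.ListenAndServe("[::]:9990", mux))
}
```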
Additional context
No response
Would you like to work on fixing this bug?
yes