fix(kuma-cp) use ipv4 when ipv4 and ipv6 are available #1947

Closed
wants to merge 1 commit

Conversation

bartsmykla
Contributor

Summary

While doing a demo, Jakub realized something was wrong: Kuma was
not working on Google's GKE with Kubernetes 1.19.9, and the `kuma-init`
containers injected into the demo's services were failing without
any logs or errors.

Everything was working fine on GKE with Kubernetes 1.18.x though,
so we realized that in the newer version containers have both
IPv4 and IPv6 addresses attached.

After adding the `--verbose` flag to `kuma-init` we found this message:
```
ip6tables-restore v1.8.4 (legacy): ip6tables-restore: unable to
initialize table 'nat'

Error occurred at line: 1
```

That reassured me the problem was related to IPv6. When I tried to
play with `ip6tables` manually, it became clear that something was
wrong with the kernel modules related to iptables for IPv6, and I
then figured out that GKE does not support IPv6 at all, even though
network interfaces receive IPv6 addresses.

The problem: in our code, if asking for a network interface's IP
address returned an IPv6 address first, we assumed we could use IPv6.

I rolled back to the solution where we ask for the NIC's IPv4
addresses and, if there are none but an address is assigned,
we assume it has to be IPv6.

It means that every time both IPv4 and IPv6 are available,
we will use IPv4 (unfortunately).
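For illustration only, here is a minimal Go sketch of that selection logic (hypothetical helper names, not Kuma's actual code): ask the interface for its addresses, use an IPv4 address if one is present, and only fall back to IPv6 when nothing else is assigned.

```go
package main

import (
	"fmt"
	"net"
)

// pickAddress is a hypothetical helper illustrating the rolled-back logic:
// prefer IPv4, and fall back to IPv6 only when no IPv4 address is assigned.
func pickAddress(iface *net.Interface) (ip net.IP, isIPv6 bool, err error) {
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, false, err
	}

	var firstIPv6 net.IP
	for _, addr := range addrs {
		ipNet, ok := addr.(*net.IPNet)
		if !ok {
			continue
		}
		if v4 := ipNet.IP.To4(); v4 != nil {
			// An IPv4 address exists: use it even if IPv6 is also assigned.
			return v4, false, nil
		}
		if firstIPv6 == nil {
			firstIPv6 = ipNet.IP
		}
	}
	if firstIPv6 != nil {
		// No IPv4 address, but something is assigned: assume IPv6.
		return firstIPv6, true, nil
	}
	return nil, false, fmt.Errorf("interface %s has no IP addresses", iface.Name)
}

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		if ip, isIPv6, err := pickAddress(&iface); err == nil {
			fmt.Printf("%s -> %s (IPv6: %v)\n", iface.Name, ip, isIPv6)
		}
	}
}
```

With this approach an interface that has both address families always resolves to IPv4, which is exactly the trade-off described above.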

@bartsmykla bartsmykla requested a review from a team as a code owner May 5, 2021 13:23
@nickolaev
Contributor

OK this might be something specific to this particular release of GKE where "GKE is not supporting IPv6 at all, even if network interfaces are receiving IPv6 addresses."
I don't think we should change our default behavior just for this corner case. How come the interface gets an IPv6 address in the first place?
I might agree to have a "force IPv4" flag, but completely defaulting to IPv4 is just wrong for me.
(for reference, see curl's approach: https://daniel.haxx.se/blog/2020/04/20/curl-ootw-ipv4/)

@bartsmykla
Contributor Author

This PR is no longer relevant, as the issue was "fixed" by #2051.

@bartsmykla bartsmykla closed this May 28, 2021
@bartsmykla bartsmykla deleted the fix/ipv6 branch May 28, 2021 03:38