Upgrading docker 1.13 on nodes causes outbound container traffic to stop working #40182
@kubernetes/sig-node-misc
Can confirm the problem with k8s 1.4.7 & Docker 1.13 on Debian jessie; kubelet is managed by systemd.
Since the team @Kargakis tagged here is no longer a team... cc: @kubernetes/sig-node-bugs
Docker 1.13 changed the default iptables forwarding policy to DROP, which has effects like this. You can change the policy to ACCEPT (which it was in Docker 1.12 and before) by running:
on every node. You need to run this in the host network namespace, not inside a pod namespace.
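The command the comment refers to (quoted verbatim later in this thread) is:

```shell
# Reset the FORWARD chain policy to ACCEPT, the Docker 1.12-era default
# (run on every node, in the host network namespace)
sudo iptables -P FORWARD ACCEPT
```

Note this is a host-level change and does not persist across reboots unless saved (e.g. with iptables-persistent).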
Tested out and working.
Could someone explain why Docker is defaulting to DROP? Does this mean containers on Docker v1.13 can't connect to the outside by default?
@feiskyer generally the Linux default is to have IP forwarding off. Docker adds two specific rules which allow traffic off its bridge, and replies to come back:
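For reference, the rules Docker installs look roughly like this (a sketch of the Docker 1.12-era defaults; exact rules vary by version, and docker0 is the default bridge name):

```shell
# Allow replies back in to containers on the bridge (existing conntrack state only)
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow traffic originating from the bridge out to anywhere else
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
```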
CNI providers which do not use the docker0 bridge do not get the benefit of those rules.
@colemickens can you clarify which network plugin you are using?
This thread sure was a lifesaver, though sadly I found it about 14 hours too late. I managed to wipe my entire cluster and reinstall everything, with the same issue still persisting. I was about to lose my mind trying to figure out why half of my original cluster was working and the other half wasn't. The nodes that didn't work were installed and added later with Docker 1.13, so this explains everything. Now I've got everything up and running again! Thanks again for this 👍
The docker change that caused this: moby/moby#28257
(@bboreham Yes, it was
The way that services work with iptables means that we will be initiating connections in to the bridge. As such, we need to be more permissive than the default Docker rules allow.
This says: forward stuff both in and out to the bridge interface.
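A minimal sketch of such per-interface rules, assuming a hypothetical bridge named cbr0 (substitute your actual bridge name):

```shell
# Accept anything forwarded in to or out of the pod bridge,
# so NEW connections initiated by the service proxy are not dropped
iptables -A FORWARD -i cbr0 -j ACCEPT
iptables -A FORWARD -o cbr0 -j ACCEPT
```

This is broader than conntrack-based rules but narrower than setting the whole FORWARD policy to ACCEPT.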
It used to be recommended to start Docker with certain flags; ref. https://kubernetes.io/docs/getting-started-guides/scratch/#docker
Personally I would be happier with the minimum required to make Kubernetes work, rather than blanket turning on IP forwarding from anywhere to anywhere.
This also appears to stop NodePort services from working. (Thanks to @bboreham for the workaround!)
sorry -- accidental close
@euank I'm not entirely convinced this is the responsibility of CNI plugins. It's discussed above, and some plugins do include workarounds, but manipulating the host to allow forwarding of traffic feels squarely outside of a CNI plugin's responsibilities as I understand them. I think we still need a solution for this in Kubernetes. CC @tmjd
We're adding an "allow this interface" chained plugin into the CNI repository. It's a clean solution to this problem.
@bboreham it works. Thank you!
@squeed iptables-allow is cool. We should document this clearly when the plugin is ready.
#52569 looks to be the proper fix for this (since v1.9.0-alpha.2).
closed with #52569
Docker 1.13 changed how it set up iptables in a way that broke forwarding. We previously got away with it because we set the ip_forward sysctl, which meant that docker wouldn't change the rule. But if we're using an image that preinstalled docker, docker might have already reconfigured iptables before we run, and we didn't set it back. We now set it back. kubernetes/kubernetes#40182
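The ip_forward sysctl mentioned in the commit message can be checked read-only on any Linux host (setting it requires root):

```shell
# Print the current IP forwarding setting: 1 = enabled, 0 = disabled
cat /proc/sys/net/ipv4/ip_forward
```

To enable it persistently you would typically set net.ipv4.ip_forward = 1 in /etc/sysctl.conf and run sysctl -p as root.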
To make this easier for those in the future: this got merged and released in 1.8.4.
As this issue was already solved (kubernetes/kubernetes#40182), we do not need to perform sudo iptables -P FORWARD ACCEPT. Fixes kata-containers#488. Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This does not make it work even when packets are forwarded via docker0, because a packet arriving from another node will not be in ctstate RELATED or ESTABLISHED; the first packet will be NEW, so it gets dropped, and the connection never reaches the ESTABLISHED state.
I experienced the same issue on Kubernetes with the Calico network stack under Debian Buster. After checking a lot of configs and parameters, I got it working by changing the policy of the FORWARD chain to ACCEPT, which made it clear the issue was somewhere around the firewall. Running iptables -L gave me the following revealing warning:

# Warning: iptables-legacy tables present, use iptables-legacy to see them

The output of the list command did not contain any Calico rules, while iptables-legacy -L showed them, so Calico evidently uses the legacy interface. The cause is Debian's switch to iptables-nft in the alternatives system, which you can check via:

ls -l /etc/alternatives | grep iptables

Doing the following:
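What likely worked here (an assumption about what "the following" referred to; these are the standard Debian commands for switching back to the legacy iptables backend) is:

```shell
# Point the iptables alternatives at the legacy backend (run as root)
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```

After switching, restart kubelet and the container runtime (or reboot) so the rules are recreated in the legacy tables.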
Now it all works fine! Thanks to Long on the Kubernetes Slack for pointing the way to solving it.
Kubernetes version (use kubectl version): v1.4.6, v1.5.1, likely many versions
Environment:
- OS (uname -a): latest 16.04-LTS kernel
Configuration details:
- kubelet runs in a container
- kube-proxy runs in iptables mode via a daemonset
What happened:
After upgrading to docker 1.13.0 on the nodes, outbound container traffic stops working
What you expected to happen:
Outbound container traffic to work (i.e., I can hit the internet and service IPs from inside the container)
How to reproduce it (as minimally and precisely as possible):
Deploy an ACS Kubernetes cluster. If the workaround has rolled out, then force upgrade docker to 1.13 (you'll have to remove a pin we're setting in /etc/apt/preferences.d).
Unclear if this repros on other configurations right now.
Anything else we need to know:
No, I just don't know where/how to best troubleshoot this.