
Port redirecting binding to IPv6 but not IPv4 interfaces. #2174

Closed
marklit opened this issue Oct 11, 2013 · 160 comments
Labels
area/networking exp/expert kind/bug Bugs are bugs. The cause may or may not be known at triage time so debugging may be needed.

Comments

@marklit

marklit commented Oct 11, 2013

Is there a way I can tell docker to only bind redirected ports to IPv4 interfaces?

I have a machine running with IPv6 disabled:

# echo '1' > /proc/sys/net/ipv6/conf/lo/disable_ipv6  
# echo '1' > /proc/sys/net/ipv6/conf/lo/disable_ipv6  
# echo '1' > /proc/sys/net/ipv6/conf/all/disable_ipv6  
# echo '1' > /proc/sys/net/ipv6/conf/default/disable_ipv6
# /etc/init.d/networking restart

ifconfig reports there are no IPv6-enabled interfaces:

# ifconfig
docker0   Link encap:Ethernet  HWaddr 00:00:00:00:00:00  
          inet addr:172.17.42.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1372 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7221 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:88091 (88.0 KB)  TX bytes:10655750 (10.6 MB)

eth0      Link encap:Ethernet  HWaddr 04:01:08:c1:b1:01  
          inet addr:198.XXX.XXX.XXX  Bcast:198.199.90.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:97602 errors:0 dropped:4 overruns:0 frame:0
          TX packets:15362 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:141867997 (141.8 MB)  TX bytes:1376970 (1.3 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

lxcbr0    Link encap:Ethernet  HWaddr 9e:51:04:ed:13:d4  
          inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

When I launch a new docker container and ask it to port forward 8000 to 8000 in the container it does so only on IPv6 interfaces. Is there a way to make it only bind to IPv4 interfaces?

# docker run -p 8000:8000 -i -t colinsurprenant/ubuntu-raring-amd64 /bin/bash

When I check with lsof it says that only IPv6-related bindings have been made:

# lsof -OnP | grep LISTEN
sshd      1275             root    3u     IPv4 ... TCP *:22 (LISTEN)
sshd      1275             root    4u     IPv6 ... TCP *:22 (LISTEN)
dnsmasq   2975      lxc-dnsmasq    7u     IPv4 ... TCP 10.0.3.1:53 (LISTEN)
docker    9629             root    7u     IPv6 ... TCP *:8000 (LISTEN)
docker    9629 9630        root    7u     IPv6 ... TCP *:8000 (LISTEN)
docker    9629 9631        root    7u     IPv6 ... TCP *:8000 (LISTEN)
docker    9629 9632        root    7u     IPv6 ... TCP *:8000 (LISTEN)
docker    9629 9633        root    7u     IPv6 ... TCP *:8000 (LISTEN)
docker    9629 9634        root    7u     IPv6 ... TCP *:8000 (LISTEN)
docker    9629 9698        root    7u     IPv6 ... TCP *:8000 (LISTEN)
@jpetazzo
Contributor

I believe that while IPv6 is disabled on all interfaces, it is not disabled on the whole machine. In other words, even if there is no IPv6 interface or address present at the moment, there might be one in the future. So when Docker tells the kernel "please bind my sockets to all available addresses", it will include IPv6.

When you try to connect to your IPv4 address (e.g. 127.0.0.1:8000) does it work or not?

  • If it doesn't work, it is indeed a serious bug!
  • If it works, then can you explain why the behavior is a problem, so we can find the best fix?

Thank you!

@marklit
Author

marklit commented Oct 11, 2013

No, I can't connect on 127.0.0.1:8000. The lsof list there is complete, and nothing from Docker is bound to an IPv4 interface. This was on Ubuntu 13.04 64-bit.

@jpetazzo
Contributor

OK! I was asking because on my machine, many sockets show as IPv6 even though IPv4 works fine. Thanks for the clarification. We'll try to reproduce here.

@marklit
Author

marklit commented Oct 11, 2013

I ran all the above on Digital Ocean on their Ubuntu 13.04 x64 image (#350076).

@juddmaltin-dell

[SOLVED] PEBCAK, PICNIC.

I'm hitting this too. (frowny)

uname -a

Linux d08-00-27-49-4f-76 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

docker -v

Docker version 0.6.4, build 2f74b1c

cat /etc/issue

Ubuntu 12.04.3 LTS \n \l

@newgoliath

I was stupidly trying to attach to the port running in the container, not the port on the host OS.

@crosbymichael
Contributor

@marklit Are you still encountering this issue with a newer version of Docker? We made a lot of fixes to the networking stack.

@phsilva

phsilva commented Dec 16, 2013

Still happening on 0.7.1.

@gvangool

I have installed it on a clean CentOS 6.5, and Docker works out of the box (EPEL installs Docker version 0.7.0, build 0ff9bc1/0.7.0).

But my containers only bind on the IPv6 side, not on IPv4.

# netstat -ntple
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       User       Inode      PID/Program name
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      0          7904       898/sshd
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      0          8151       926/sendmail
tcp        0      0 :::80                       :::*                        LISTEN      0          8760       966/docker
tcp        0      0 :::22                       :::*                        LISTEN      0          7906       898/sshd
tcp        0      0 :::443                      :::*                        LISTEN      0          8755       966/docker
# docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                                      NAMES
51bd237afd47        proxy:latest              nginx               14 minutes ago      Up 12 minutes       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp   lb0
#  uname -a                                                                                                                                                                  
Linux docker0 2.6.32-431.1.2.0.1.el6.x86_64 #1 SMP Fri Dec 13 13:06:13 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

If you need extra information, or a test machine. Let me know.

@bharrisau

Using Docker version 0.7.1, build 8088bc1/0.7.1. I get the same, except it all works with IPv4.

e.g., if I were to run 'telnet -4 localhost 80' in the example above, it would connect. It doesn't work for external connections, but I think that is a different issue.

@aheissenberger

I have the same problem with version 0.7.3: after starting boot2docker, only explicitly binding 0.0.0.0 works:
docker run -d -p 0.0.0.0::11211 mc

This does not work:
docker run -d -p 11211 mc

In both cases the result from docker ps is `0.0.0.0:49154->11211/tcp`,
and netstat shows that there was only an IPv6 binding:

sudo netstat -ntple
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      520/sshd
tcp        0      0 :::49153                :::*                    LISTEN      684/docker
tcp        0      0 :::4243                 :::*                    LISTEN      684/docker
tcp        0      0 :::22                   :::*                    LISTEN      520/sshd

The funny thing is that any further process started with docker run -d -p 11211 mc will work.

@bharrisau

From https://groups.google.com/d/msg/golang-nuts/F5HE7Eqb6iM/q_um2VqT5vAJ

on linux, by default, net.ipv6.bindv6only is 0, so ipv4 packets could also be received from
ipv6 sockets with ipv4-mapped ipv6 address. thus you only need to listen on tcp6 socket
and we can support both ipv4 and ipv6.

if you want explicitly only listen on ipv4 port, you will have to use net.Listen("tcp4", "0.0.0.0:3000")
and then pass the listener to http.Serve.

This is why binding to the IPv6 loopback also binds to the IPv4 loopback (though netstat won't show it). Most of the work is done by the iptables -t nat stuff anyway.

@matschaffer

FWIW, I found this issue trying to figure out why a port mapping wouldn't work from my host OS (host -> vagrant -> docker container). I tried another box and it worked even though I only had the tcp6 port listed in netstat. Thinking something else may be happening here but not sure what.

UPDATE: yeah, just destroyed and recreated the VM and now it's fine. Yay computers ;)

@shulcsm

shulcsm commented Feb 1, 2014

Having the same issue with 0.7.6. Are there any workarounds?

@farcaller
Contributor

It seems that you can bind to an IPv4-only port with -p HOST_IPV4ADDR:PORT:PORT. We need a better solution, though.

@teepark

teepark commented Feb 5, 2014

I'm having this issue as well (brought it up in freenode a few times today).

Using the host portion of the -p flag (-p ADDR:PORT:PORT) doesn't fix it for me.

I'm on 0.7.6.

@gesellix
Contributor

As described in my blog post, you could try enabling packet forwarding for IPv6 by adding the following line to your /etc/sysctl.conf:
net.ipv6.conf.all.forwarding=1

aheissenberger added a commit to aheissenberger/boot2docker that referenced this issue Feb 17, 2014
aheissenberger added a commit to aheissenberger/boot2docker that referenced this issue Feb 17, 2014
steeve added a commit to boot2docker/boot2docker that referenced this issue Feb 20, 2014
@tianon
Member

tianon commented Feb 21, 2014

As @bharrisau mentioned, can someone who's running into this paste the output of sysctl net.ipv6.bindv6only?

I'm very interested in helping track down a proper fix for this, because setting "net.ipv6.conf.all.forwarding" is pretty much always wrong for networks that actually have IPv6.

@phsilva

phsilva commented Feb 21, 2014

With net.ipv6.conf.all.forwarding=1 it works for me now, and net.ipv6.bindv6only=0 on my system. netstat still shows only the tcp6 bind, but curl to the IPv4 address works, so forwarding does the job.

@bharrisau

I think (from memory) you can force the binding to IPv4 in the proxy setup function.

On my server the NAT rules change the target address and forward the packet that way. Nothing is actually using the proxy port. So I still think any issues people are having are caused by something else, like firewall rules.


@gvangool

So, I've tried it on a clean install of CentOS with the latest Docker from the EPEL repository (0.8.0). And it just works™.

In netstat it's still shown as an IPv6 binding, but it works on IPv4 as well.

@gesellix
Contributor

gesellix commented Mar 1, 2014

@tianon for the record, this is my configuration:

vagrant@vagrant-ubuntu-saucy-64:~$ sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
vagrant@vagrant-ubuntu-saucy-64:~$ sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1
vagrant@vagrant-ubuntu-saucy-64:~$ uname -a
Linux vagrant-ubuntu-saucy-64 3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
vagrant@vagrant-ubuntu-saucy-64:~$ cat /etc/lsb-release 
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=13.10
DISTRIB_CODENAME=saucy
DISTRIB_DESCRIPTION="Ubuntu 13.10"

@boxofrox

I just ran into this problem with Docker 0.9.0 on boot2docker 0.7.0. I cannot access a container (Jenkins) on port 8080 over IPv4, even from localhost.

docker@boot2docker:~$ sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
docker@boot2docker:~$ sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1
docker@boot2docker:~$ sudo netstat -tapn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      544/sshd
tcp        0    192 192.168.70.166:22       192.168.70.218:43752    ESTABLISHED 1111/sshd: docker [
tcp        0      0 :::22                   :::*                    LISTEN      544/sshd
tcp        0      0 :::8080                 :::*                    LISTEN      567/docker
tcp        0      0 :::4243                 :::*                    LISTEN      567/docker
docker@boot2docker:~$ telnet 127.0.0.1 8080
Connection closed by foreign host
docker@boot2docker:~$ ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 86:25:9e:72:dd:fd:0f:37:3b:58:e7:13:3d:c0:7a:30.
Are you sure you want to continue connecting (yes/no)? no
Host key verification failed.

I included the ssh attempt to show that accessing a local service works if it binds on IPv4.
I'm also adding a dump of my sysctl net settings: https://gist.github.com/boxofrox/46298a51a11d2cf87bbe

@mik3y

mik3y commented Mar 23, 2014

@boxofrox have you tried launching jenkins such that it binds to 0.0.0.0 (and not localhost)? Based on discussion in #4021 it appears to solve the issue (as it did for me).

@boxofrox

@mik3y, you mean Jenkins is binding to 127.0.0.1 within the container? That is not the case. I can run the Jenkins container on my Arch Linux system and the same test succeeds. I also ran netstat -tapn within my Jenkins container; it binds to IPv6 for all addresses.

root@d26504ec7def:/# netstat -tapn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      15/sshd
tcp6       0      0 :::8080                 :::*                    LISTEN      20/java
tcp6       0      0 :::22                   :::*                    LISTEN      15/sshd

The fact that the same container works on Arch Linux but not on boot2docker indicates that something may not be configured correctly in boot2docker. What that misconfiguration is, I don't know. The few relevant Google results I can find indicate that net.ipv6.bindv6only=0 and net.ipv6.conf.all.forwarding=1 should suffice.

@mrjana mrjana self-assigned this Oct 10, 2016
@RushOnline

@mrjana Actually, docker-proxy doesn't listen on IPv4 if you specify the wildcard (0.0.0.0).

@mrjana
Contributor

mrjana commented Oct 11, 2016

@RushOnline Are you sure?

[ec2-user@ip-172-31-27-46 ~]$ docker run -id -p 9090:80 nginx
594451ba44449a72d39b0bc88b568b3a278cc872bf1de961c241e392ef817772
[ec2-user@ip-172-31-27-46 ~]$ sudo netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4475/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2747/master
tcp6       0      0 :::2377                 :::*                    LISTEN      28350/dockerd
tcp6       0      0 :::7946                 :::*                    LISTEN      28350/dockerd
tcp6       0      0 :::22                   :::*                    LISTEN      4475/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      2747/master
tcp6       0      0 :::9090                 :::*                    LISTEN      29069/docker-proxy
[ec2-user@ip-172-31-27-46 ~]$ curl 172.31.27.46:9090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
...

@RushOnline

@mrjana Not sure anymore. Two weeks ago the same sequence as in your example resulted in "connection reset"; for now it's OK.

@mrjana
Contributor

mrjana commented Oct 12, 2016

So what do we do with this issue? Should we close it? I am not sure what more we can get out of it. It is pretty clear that the issues various folks are having with port publishing are probably not related to docker-proxy listening problems, as it is actually listening on both IPv4 and IPv6 addresses if the host is dual-stacked. It is probably due to some other issue, perhaps involving iptables rules.

@RushOnline

RushOnline commented Oct 12, 2016

@mrjana I think there are many ways to hit this issue, but if the user does everything right they won't hit it. So I think this is not a Docker bug, but a signal to add more diagnostic messages or improve the docs. IMHO, you can close it.

@xcellardoor

Just chiming in that I'm suffering from this as well when I try to run a Gogs container on Fedora. The container was running fine; I upgraded the system (including Docker), restarted the container, and now when I check the system this is what's happening:

  • Port 80 is open and listening from Fedora's perspective, bound to 0.0.0.0 (so any interface).
  • When I run 'netstat -tulpn' inside the Gogs container itself, it shows that only the IPv6 address is listening. There is no IPv4.

Happy to run tests as people instruct; it's not production or anything, just a silly git repo I play with.

@saytik

saytik commented Oct 25, 2016

I have the same problem on CentOS 7. I have disabled IPv6 via sysctl, but:
tcp6 0 0 :::8080 :::* LISTEN 12969/docker-proxy
tcp6 0 0 :::80 :::* LISTEN 12874/docker-proxy

@inongogo

My problem when I encountered this issue was that my firewall was blocking the network the container was a member of. The firewall had a rule at the bottom that denied all ports and IP addresses not defined in the list above it. My container was part of the subnet 172.17.0.0/16, so after adding that to the allow-list in the firewall, all was well.

@saytik

saytik commented Oct 26, 2016

This fixed the problem:

nano /etc/default/grub
add ipv6.disable=1 to the GRUB_CMDLINE_LINUX line, like:
GRUB_CMDLINE_LINUX="ipv6.disable=1 ..."
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

@xcellardoor

xcellardoor commented Oct 26, 2016

I believe I have found part of the problem - FirewallD, I have posted a workaround - see #27491 (comment)

@ricemouse

I also have this issue; I tried most of the solutions above but they didn't help.

$ docker run -p 8080:80 -itd swagger-ui-builder
$ netstat -anp | grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      -

My system information:
$ docker --version
Docker version 1.12.3, build 6b644ec
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.1 LTS
Release: 16.04
Codename: xenial
$ sysctl net.ipv6.conf.all.disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 1

@xcellardoor

@ricemouse Have you tried the command I posted at #27491 (comment), which uses the full IPv4 address you want the container to bind to on the host system?

@cpuguy83
Member

Folks, the service in the container must be listening on something other than localhost for port forwarding to work.

Please do not post things like "docker run -p <port>:<port> myCustomimage doesn't work".
If you have a real, reproducible case then we want to see it... but if you have some custom image that no one has access to, of course we can't do anything to test it. It's just adding more noise to an already incredibly noisy and ambiguous issue.

@Tahvok

Tahvok commented Feb 16, 2017

@cpuguy83 please see this:

ubuntu@neo4j:~$ docker run -d --publish=7474:7474 --publish=7687:7687 --volume=$HOME/neo4j/data:/data --volume=$HOME/neo4j/logs:/logs --volume=$HOME/neo4j/conf:/conf --ulimit=nofile=40000:40000 --name neo4j neo4j:3.1.1-enterprise
9140aa67d2db320bd3875ff54b807d64e7bac62602efb47dcb299bb84de91b11
ubuntu@neo4j:~$ ss -tupln
Netid State      Recv-Q Send-Q                                                           Local Address:Port                                                                          Peer Address:Port              
udp   UNCONN     0      0                                                                            *:68                                                                                       *:*                  
tcp   LISTEN     0      128                                                                          *:22                                                                                       *:*                  
tcp   LISTEN     0      128                                                                         :::7687                                                                                    :::*                  
tcp   LISTEN     0      128                                                                         :::7474                                                                                    :::*                  
tcp   LISTEN     0      128                                                                         :::22                                                                                      :::*                  

Using unmodified official ubuntu cloud image.

Setting net.ipv6.conf.all.forwarding=1 fixes the issue. I didn't try other things, but I'm willing to test anything you might want.

@darthferretus

darthferretus commented Feb 28, 2017

Same problem on Ubuntu 16.04.2

$ docker -v
Docker version 1.13.1, build 092cba3
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:10022           0.0.0.0:*               LISTEN      1385/sshd
tcp6        0      0 :::5000            :::*               LISTEN      16794/docker-proxy
tcp6        0      0 :::5005            :::*               LISTEN      62484/docker-proxy
tcp6        0      0 :::80              :::*               LISTEN      62512/docker-proxy
tcp6        0      0 :::22              :::*               LISTEN      62527/docker-proxy
tcp6        0      0 :::443             :::*               LISTEN      62499/docker-proxy

Fixed by

$ nano /etc/default/grub
...
GRUB_CMDLINE_LINUX="ipv6.disable=1"
...
$ update-grub
$ reboot

@geekpete

geekpete commented Mar 9, 2017

This ticket is a beast: one symptom (or a variation of one), but multiple possible causes. This needs to be definitively addressed in the documentation.

And are there also some circumstances where people still cannot solve the issue for themselves even with this vast array of possible solutions?

@mostolog

mostolog commented Mar 21, 2017

Hi.

CC @cpuguy83
I think I reproduced this issue with 17.03.0-ce.

$docker run --rm --detach --publish 80:80 ubuntu:16.04 tail -f /dev/null
dc469064248406348b0291e31766c66f3cb8a854b157d52f4de0cf3272966f43

$ netstat -an | grep 80
tcp6       0      0 :::80                   :::*                    LISTEN
unix  2      [ ]         DGRAM                    18064

$ ifconfig
docker0   Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2792 (2.7 KB)  TX bytes:900 (900.0 B)
eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX  
          inet addr:XXX.XXX.XXX.XXX  Bcast:XXX.XXX.XXX.XXX  Mask:255.255.255.240
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5633 errors:0 dropped:49 overruns:0 frame:0
          TX packets:5018 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:644526 (644.5 KB)  TX bytes:1219242 (1.2 MB)
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2003 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2003 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:186799 (186.7 KB)  TX bytes:186799 (186.7 KB)
veth89c76d4 Link encap:Ethernet  HWaddr 82:ff:f4:42:af:0b  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:948 (948.0 B)  TX bytes:300 (300.0 B)

IPv6 was previously enabled on the system, but not anymore (in case that has something to do with it).

Should I use interface to publish as @xcellardoor suggested?
Should I open a new issue?

Regards

UPDATE: It seems the service is working and listening on IPv4, but is not shown by netstat. Maybe that is what's expected? (Because the host is not listening; the container is.)

@cpuguy83
Member

@mostolog The host is listening, since it runs a proxy process for local traffic (hairpinning in particular) and to occupy the port.
I suspect there is something weird with netstat in this case, where IPv6 was enabled at some point and Docker is listening on 0.0.0.0.

@mostolog

@cpuguy83 If you're interested in fixing this, I can run whatever you may need...

@cpuguy83
Member

@mostolog I don't think there's something to fix in such a case.

  1. Assuming there's anything listening on the host side is incorrect. It only does so for historical reasons (and the whole hairpinning issue).
  2. The issue is with netstat.

In the current default config, traffic routed to published ports can follow two paths, depending on whether it's local traffic or external traffic.

Let's take the following container:

# docker run -d -p 80:80 nginx

This yields the following iptables NAT rules:

# iptables -t nat -L -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    1    60 DOCKER     all  --  any    any     anywhere            !loopback/8           ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MASQUERADE  all  --  any    !docker0  172.18.0.0/16        anywhere
    0     0 MASQUERADE  all  --  any    !docker_gwbridge  172.19.0.0/16        anywhere
    0     0 MASQUERADE  tcp  --  any    any     172.18.0.2           172.18.0.2           tcp dpt:http

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  docker0 any     anywhere             anywhere
    0     0 RETURN     all  --  docker_gwbridge any     anywhere             anywhere
    0     0 DNAT       tcp  --  !docker0 any     anywhere             anywhere             tcp dpt:http to:172.18.0.2:80

This says: all traffic where the destination is local needs to go to the DOCKER chain.

Once it hits the DOCKER chain, anything coming from docker0 (the bridge interface) is returned immediately (meaning it skips any further processing in that chain). Since there's nothing else in the PREROUTING chain for docker0 traffic, this hits the host network stack.
You'll notice there is a docker-proxy process running. This handles traffic from the bridge: docker-proxy listens on port 80 (the requested host port) on the requested interfaces (usually all interfaces) and proxies all requests to the container IP/port in userspace.
This is the local-traffic handling, and it is for local traffic only.

You'll notice the DNAT rule at the end of the DOCKER chain that says anything incoming on port 80 that does not originate from docker0 (the bridge interface on the host that Docker uses) will be routed to the container IP's port 80.
This is for external traffic. It goes directly to the container and does not hit the host network stack (except to be routed through the bridge interface). Even if you had something listening on 80 on the host, it would never receive traffic whose source matched this rule.

As you can see, doing a curl localhost is not enough to even test the availability of the service, and even if netstat correctly reported that docker-proxy is listening on tcp4, this would not be enough to determine whether things are set up correctly, as that path is for local traffic only.
Note that technically the usage of docker-proxy could be pared down further to only forward traffic for <source ip>==<destination ip> && interface==<bridge name> (i.e. hairpinning), but this requires many more iptables rules, which incur overhead for incoming packets.
And ideally it wouldn't even exist, since the bridge can be set up to enable hairpinning (it's not by default, hence docker-proxy), but there's a pretty nasty kernel bug related to this, so we leave it off by default.

Meanwhile here's my netstat output:

# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 :::2377                 :::*                    LISTEN      7814/dockerd
tcp6       0      0 :::7946                 :::*                    LISTEN      7814/dockerd
tcp6       0      0 :::80                   :::*                    LISTEN      8503/docker-proxy
udp        0      0 0.0.0.0:4789            0.0.0.0:*                           -
udp6       0      0 :::7946                 :::*                                7814/dockerd

TCP6 only... but...

# curl 127.0.0.1 # use the actual v4 address to make sure `localhost` doesn't resolve to a v6 address
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Even lsof reports this as v6 (likely due to the listen address):

# lsof -n -i TCP
COMMAND    PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
dockerd   7814 root   13u  IPv6 5955398      0t0  TCP *:2377 (LISTEN)
dockerd   7814 root   14u  IPv4 5955400      0t0  TCP 172.17.0.2:55708->172.17.0.2:2377 (ESTABLISHED)
dockerd   7814 root   18u  IPv6 5955401      0t0  TCP 172.17.0.2:2377->172.17.0.2:55708 (ESTABLISHED)
dockerd   7814 root   21u  IPv6 5955403      0t0  TCP *:7946 (LISTEN)
docker-pr 8503 root    4u  IPv6 5957159      0t0  TCP *:http (LISTEN)

I'm going to go ahead and close this issue since there does not seem to be anything to actually fix here.
Thanks all for looking into this.
If forwarding is not working for you, I strongly recommend taking a look at the process in the container (and make sure IP forwarding is enabled, of course!).

@softwarevamp

softwarevamp commented Jul 5, 2017

seems still binding on ipv6 only. netstat -anp | grep docker-proxy | grep LISTEN reports:

[root@ip-172-31-12-254 centos]# netstat -anp | grep docker-proxy | grep -i listen
tcp6       0      0 :::6379                 :::*                    LISTEN      2374/docker-proxy
tcp6       0      0 :::8983                 :::*                    LISTEN      2825/docker-proxy
tcp6       0      0 :::9092                 :::*                    LISTEN      3160/docker-proxy
tcp6       0      0 :::2181                 :::*                    LISTEN      2402/docker-proxy

@perlun

perlun commented Dec 8, 2017

@softwarevamp

seems still binding on ipv6 only

It usually works correctly even though it looks like it's binding on IPv6 only. I have this scenario on one Docker server right now, and it works just fine. On another one it doesn't, but that's because the service isn't listening on the port inside the container in my case. The tcp6 thing is a side track, unfortunately.

@cpuguy83, I suggest the issue be locked to avoid further noise. If people find new problems, a new issue should be created anyway.

@thaJeztah
Member

Agreed; locking the conversation on this issue.

@moby moby locked and limited conversation to collaborators Dec 9, 2017