Listen only on localhost by default #3002

Closed

Conversation

berezovskyi

⚠️⚠️⚠️ Since we do not accept all types of pull requests and do not want to waste your time, please be sure that you have read the pull request rules:
https://github.com/louislam/uptime-kuma/blob/master/CONTRIBUTING.md#can-i-create-a-pull-request-for-uptime-kuma

Tick the checkbox if you understand [x]:

  • I have read and understand the pull request rules.

Description

Update docker-compose.yml to avoid a common footgun: Docker punches through your firewall to expose this port without asking you first.
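For illustration, the change amounts to prefixing the published port with the loopback address in docker-compose.yml (a minimal sketch; the service name and image tag are assumed from the project's stock compose file, not part of this diff):

    services:
      uptime-kuma:
        image: louislam/uptime-kuma:1
        ports:
          # Bind the published port to the loopback interface only, so it is not
          # reachable from other hosts even though Docker manages iptables itself.
          - "127.0.0.1:3001:3001"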

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Related to security but not high-severity enough to follow https://github.com/louislam/uptime-kuma/blob/master/SECURITY.md

Checklist

  • My code follows the style guidelines of this project
  • I ran ESLint and other linters for modified files
  • I have performed a self-review of my own code and tested it
  • I have commented my code, particularly in hard-to-understand areas
    (including JSDoc for methods)
  • My changes generate no new warnings
  • My code needed automated testing, and I have added it (this is an optional task)

@Computroniks
Contributor

Is this really required? I would have thought it should be fairly obvious that if you spin up the Docker container, you expect it to be accessible across the network, not just from localhost. For example, apache2 automatically starts listening on port 80 when installed. I worry that if we make this change, we would end up with a bunch of issues stating that the UI isn't accessible because people haven't read the docs / changed the file.

@berezovskyi
Author

berezovskyi commented Mar 31, 2023

You are right, there is a risk of a degraded user experience, but I think it's a worthwhile change from a security standpoint; perhaps adding a comment would help.

However, the analogy with Apache2 is not (fully) correct. If I have a firewall on my Linux server (ufw is the most popular choice on Ubuntu) and I install apache2, it starts to listen on port 80 but... nothing happens, I can't reach the server. This is because the port is not open in the firewall, I need to open it with something like sudo ufw allow 80/tcp and only then can I reach apache2. Docker, however, will "punch" your firewall by modifying the iptables to open a given port.

Actually, my use case is precisely to be able to run Uptime Kuma behind apache2 (or, in my case, Caddy, as it deals with TLS certs nicely). The first time I set things up, I started the web server on port 80, opened port 80 in the ufw firewall config and ran a Docker service on a high port, while having the web server act as a reverse proxy, adding basic auth and TLS. I was expecting the service to be available publicly through port 80 only, before I discovered by accident that the service was also reachable through the high port. I double-checked the firewall and saw that the port should still be blocked.

If Docker did not punch through the firewall, I would not have filed this PR. There is a long-running issue (moby/moby#22054), but Docker refused to change anything. And there are many beginners who are not aware that Docker punches through the firewall for them. I know of no other software you can install on Ubuntu that does this.
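To illustrate the surprise, here is a hypothetical session (the server name and port are placeholders):

    # On the Docker host: ufw shows no rule allowing 3001, so you might assume it is blocked.
    sudo ufw status verbose

    # Yet from another machine on the network the published port still answers,
    # because Docker's DNAT rule in its own iptables chain bypasses ufw's INPUT rules:
    curl -I http://my-server:3001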

@Computroniks
Contributor

Interesting, I didn't realise that docker literally modifies the system firewall when it loads an app. That really doesn't seem like a good idea.

@louislam
Owner

louislam commented Apr 2, 2023

I would have thought it should be fairly obvious that if you spin up the Docker container, you expect it to be accessible across the network, not just from localhost.

I understand the concern, but I have the same thought as @Computroniks. I think people will be confused about why Uptime Kuma is inaccessible, and they will eventually go into the yaml file and delete the 127.0.0.1. Also, with this change, I believe IPv6 support will be disabled.

I would recommend changing it to a comment with an explanation; users can uncomment it when they need it.

In addition, I think this issue is not just about Uptime Kuma; it affects the Docker and self-hosted application community as a whole. I also didn't know about it until now (luckily, my VPS provider gives me a second layer of firewall). It may be good to let more people know about it on https://www.reddit.com/r/selfhosted/
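A sketch of what that comment-based approach could look like in docker-compose.yml (wording and service name are assumptions, not the final change):

    services:
      uptime-kuma:
        ports:
          # Default: listen on all interfaces. Note that Docker's port publishing
          # bypasses host firewall frontends such as ufw.
          - "3001:3001"
          # To restrict access to this machine only (e.g. behind a reverse proxy),
          # replace the line above with:
          # - "127.0.0.1:3001:3001"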

@berezovskyi
Author

I would recommend changing it to a comment with an explanation; users can uncomment it when they need it.

I will update the PR.

@louislam added the question (Further information is requested) label Apr 9, 2023
@stefanux

Right now there is no documentation on how to achieve that in manual installs, or at least a mention of how to do it in the compose file.
Instead, it is mentioned that it runs on localhost:3001, which is not true.

The possibility to bind to 127.0.0.1 (or ::1) - for the UI! - is a basic feature; unencrypted HTTP connections should only be allowed to redirect to HTTPS. I'm not sure how (or whether) Node.js can handle different bindings for checks and the UI, but it seems useful to have different methods, or at least to force TLS encryption.

User expectations are another matter; a default binding on 0.0.0.0 (or ::) is still possible (and likely advisable for novice users who do not understand compose files), but:

Also, with this change, I believe IPv6 support will be disabled.

IPv6 support in Docker is poor anyway: either you can use it only to reach the UI (but not for checking v6-only services!), or you have to configure pure v6-only, which is only possible with public v6 addresses (no NAT) via "fixed-cidr-v6", or implement your own NAT with ULA (fd00...) addresses, which is even uglier than that.
That's another story in k8s (k3s, ...), but that's not the officially supported deployment anyway.

Long story short: if you want Uptime Kuma in dual-stack mode (without more or less ugly Docker hacks), you need to run it manually on the host.

@polarathene

This is because the port is not open in the firewall, I need to open it with something like sudo ufw allow 80/tcp and only then can I reach apache2.
Docker, however, will "punch" your firewall by modifying the iptables to open a given port.

AFAIK, Docker is effectively doing the equivalent, but with a NAT rule to direct traffic to the container's IP + port. Since it's an iptables rule, not something UFW-specific, it's not unreasonable that UFW isn't aware of it (especially since it's scoped to the DOCKER chain?).

I think it's easier to view it this way: UFW is just a frontend; it does not have exclusive control of the firewall.

I started the web server on port 80, opened port 80 in the ufw firewall config and ran a Docker service on a high port, while having the web server act as a reverse proxy, adding basic auth and TLS.
I was expecting the service to be available publicly through port 80 only, before I discovered by accident that the service was also reachable through the high port.
I double-checked the firewall and saw that the port should still be blocked.

When you mention a "Docker service on a high port", I assume this has been published with -p or ports:?

You have a few options to handle that.

  • You could instead run Caddy in a container and not publish ports for your other containers; they'll be reachable over the same shared Docker network (see the compose sketch below).
  • You could also use a plugin for Caddy (caddy-docker-proxy) or Traefik, which provide similar container proxying functionality.
  • Be explicit, as you've done with the configuration in this PR, binding to the loopback interface 127.0.0.1.

This is more of a learning gotcha with UFW, and AFAIK it's not only caused by Docker? It's just that you're not using other software that offers similar capabilities.
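A sketch of the first option, assuming a stock Caddy image and a Caddyfile you provide (names are illustrative): Caddy is the only service that publishes ports, and it reaches Uptime Kuma by service name over the default compose network.

    services:
      caddy:
        image: caddy:2
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile:ro

      uptime-kuma:
        image: louislam/uptime-kuma:1
        # No ports: entry, so nothing is exposed on the host; Caddy can still
        # reach it as http://uptime-kuma:3001 on the shared compose network.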

@polarathene

Also, with this change, I believe IPv6 support will be disabled.

Just to clarify, if the host system can be reached by IPv6, by default that gets routed through an IPv4 gateway of a network interface Docker manages. Containers then see the gateway address instead of the original client IP due to NAT64 IIRC.

You can configure Docker to enable IPv6 (experimental; it is opt-in per network, and each compose file creates its own separate default network), giving containers IPv6 addresses (ULAs). This retains the client's IPv6 connection address.


Firewall modification

I didn't realise that docker literally modifies the system firewall when it loads an app.

AFAIK it doesn't (UFW isn't the actual firewall itself), not intentionally at least.

It's a feature of Docker that you opt into explicitly via a CLI option (docker run -p) or a config setting (compose ports:). If a user is blindly following instructions online without understanding what they're doing (and making assumptions), is that really the fault of the software?

Who's really to blame?

It's fairly obvious early on with Docker that the ports get exposed publicly like this.

I understand it may seem surprising when it's not playing nice with other software that also configures the firewall, but AFAIK it's no different to inserting a snippet into your .bashrc / .zshrc or similar (some software knows what it wants to do, but won't know what else exists in there for sure, and if it'll conflict).

Is it a bug with Docker? Or a bug with UFW? Who should accommodate the other? (not accounting for other unknowns that disrupt that further)

Bad workaround advice

There is some advice to configure rules via UFW to work around the compatibility issue. However, when UFW allows opening a port on the host that is also mapped inside one or more containers... it accidentally exposes all containers that have that port internally mapped via the associated published port on the host interface(s) (which the workaround intended to keep closed). So then you get other workarounds, but they are not likely what users wanted (they don't support the ufw CLI).

I've not got enough expertise in the area to know for certain, but presumably if Docker gets around to implementing support for managing the firewall via nftables (instead of relying on shims for compatibility with the legacy iptables rules / syntax), it could more reliably avoid the issue? (from what I've read, it's an issue with how iptables rules are managed).

That is not without caveats, though: for using both iptables and nftables, the nftables docs note that it requires at least kernel 4.18, and for stateful NAT at least kernel 5.2. You'll find some products, like NAS devices, are behind that, or complicate things with their own backport patches and other modifications.


What's actually going on

Presently both UFW and FirewallD support iptables and nftables. Ubuntu 23.04 (UFW) and Fedora 37 (FirewallD) are both using iptables-nft.

Neither firewall frontend will block the container below from being reachable on public networks, even though you'd normally need to allow the port explicitly for other software.

$ docker run --rm -p 8000:80 traefik/whoami

$ docker ps
CONTAINER ID   IMAGE            COMMAND     CREATED         STATUS         PORTS
1d4da56c490a   traefik/whoami   "/whoami"   4 seconds ago   Up 4 seconds   0.0.0.0:8000->80/tcp, :::8000->80/tcp

Why

  • This is because the user is familiar with other software only binding to a port, not needing to manipulate the firewall (which Docker has only done because of -p).
  • If instead port mapping wasn't used and the container ran on the host network with --network host, it'd be similar to what a user expects with most software (as sketched below).
  • The Docker docs do clarify that the default binding makes the port publicly accessible (including UFW rules not being applicable).
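For comparison, the host-network variant of the same container (whoami listens on port 80 by default, so this only works if nothing else is using that port):

$ docker run --rm --network host traefik/whoami
# No ports are published and no iptables rules are added; the process binds
# directly on the host, so ufw rules such as `sudo ufw allow 80/tcp` apply as usual.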

How to fix

The referenced docs also direct the reader to a more in-depth page detailing how Docker interacts with iptables. There is advice on:

  • Disabling this behaviour with --iptables=false (but then you'll have to explicitly identify a container's IP each time and manually forward the ports, while accepting the risk of troubleshooting more networking woes)
  • Changing the default interface address Docker binds to instead of whatever is available via 0.0.0.0.
  • Integration with other services that interface with iptables, or adding additional constraints for Docker.

With FirewallD you can configure the docker zone, while UFW presumably lacks an equivalent feature for Docker to integrate with. A firewall-frontend-agnostic way, of course, is just to configure the Docker daemon to do what you'd prefer the default to be (see the sketch below).

For most users that actually is to bind on 0.0.0.0 and explicitly provide an IP for the few containers they don't want publicly reachable. If you don't want the ports published this way, use the host network, Traefik, a VM, or disable Docker's iptables management and DIY.
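For example, Docker's documented "ip" daemon option changes the default address that published ports bind to (a sketch for /etc/docker/daemon.json; restart the daemon afterwards):

    {
      "ip": "127.0.0.1"
    }

With that in place, -p 8000:80 behaves like -p 127.0.0.1:8000:80 unless an address is given explicitly.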

@polarathene

Unencrypted HTTP connections should only be allowed to redirect to HTTPS. I'm not sure how (or whether) Node.js can handle different bindings for checks and the UI, but it seems useful to have different methods, or at least to force TLS encryption.

Use a reverse proxy like nginx, Caddy, or Traefik; they specialize in this.

Caddy is simple to configure and will provision your TLS certs for you, along with a default HTTP-to-HTTPS redirect (unless you're explicitly making the service reachable only via http:// / :80, or providing only an IP or localhost).
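A minimal Caddyfile for that setup might look like the following (the hostname is a placeholder; Caddy obtains the certificate and applies the HTTP-to-HTTPS redirect automatically for a public domain):

    status.example.com {
        reverse_proxy 127.0.0.1:3001
    }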


Long story short: if you want Uptime Kuma in dual-stack mode (without more or less ugly Docker hacks), you need to run it manually on the host.

What is ugly about using ULA?

If you're using Docker's networking with private-range IPv4 addresses, I don't see why ULA isn't acceptable?

  1. Make sure you have these 3 settings enabled in /etc/docker/daemon.json (enables IPv6 NAT instead of the default NAT64):

    {
      "ip6tables": true,
      "experimental" : true,
      "userland-proxy": true
    }

    Apply the new settings with systemctl restart docker.

  2. Create a network with an IPv6 subnet and run a container that uses it:

    docker network create --ipv6 --subnet fd00:face:feed:cafe::/64 example-ipv6
    docker run --rm -d -p 80:80 --network example-ipv6 traefik/whoami

    or with docker-compose.yaml:

    services:
      test:
        image: traefik/whoami
        ports:
          - '80:80'
    
    networks:
      # Overrides the default network created + attached to each service above:
      default:
        enable_ipv6: true
        ipam:
          config:
            - subnet: fd00:face:feed:cafe::/64 
    
    # To use the `daemon.json` default bridge with IPv6 instead of creating the above custom `default` network,
    # Add `network_mode: bridge` to a service instead.
  3. Test it:

    # EXTERNALLY_FACING_IP could be your IPv6 address for your server,
    # eg: `http://[2001:19f0:7001:13c9:5400:4ff:fe41:5e06]`
    curl -s "http://${EXTERNALLY_FACING_IP}" | grep RemoteAddr
    
    # Alternatively if you have DNS AAAA record setup and need to force IPv6:
    curl -s6 "http://example.com" | grep RemoteAddr

When you test from another machine, the client's IPv6 address is visible to the container as the RemoteAddr. Without the IPv6 NAT (ip6tables: true), you'd get an IPv4 gateway IP from Docker instead, IIRC.

But that's it: a few settings to enable once, and then creating a network with IPv6 enabled on a ULA subnet. Not sure what is ugly about that?

@gwisp2

gwisp2 commented Apr 18, 2023

A 127.0.0.1:3001:3001 binding does not actually protect containers from being accessed from an external network.
For such a port mapping, Docker creates two iptables rules:

  1. table nat, used in PREROUTING: if daddr is 127.0.0.1 and dport is 3001, DNAT to CONTAINER_IP:3001
  2. table filter, used in FORWARD: if daddr is CONTAINER_IP and dport is 3001, ACCEPT

Any other host in the same LAN can tinker with its routes and connect to CONTAINER_IP:3001, bypassing the DNAT rule, and the second rule accepts such a connection attempt.
However, the workaround in the PR is still better than nothing; at least it protects against hosts outside the LAN.
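Roughly, those two rules look like this in iptables-save output (simplified; 172.17.0.2 stands in for the container's IP):

    # nat table, reached from PREROUTING via the DOCKER chain:
    -A DOCKER ! -i docker0 -p tcp -d 127.0.0.1 --dport 3001 -j DNAT --to-destination 172.17.0.2:3001
    # filter table, FORWARD jumps into the DOCKER chain:
    -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp --dport 3001 -j ACCEPT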

@polarathene

Any other host in the same LAN can tinker with its routes and connect to CONTAINER_IP:3001, bypassing the DNAT rule, and the second rule accepts such a connection attempt.

Do you have an example of commands?

How does it compare to just using network_mode: "host"? Perhaps that's a better choice for the concern?

@gwisp2

gwisp2 commented Apr 19, 2023

Sorry, I posted the link to the wrong issue. I've just fixed my previous comment; here is the correct link: moby/moby#14041. An example of commands is shown in the issue description.

network_mode: "host" solves the issue as long as the application is listening on 127.0.0.1.
You can't use the same routing trick to access ports on a remote 127.0.0.1 address because the kernel handles specially a case with a loopback destination address, you will see something like 'packet dropped, martian destination 127.0.0.1' in the kernel logs (dmesg).
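For reference, the trick described in that issue boils down to adding a static route on another LAN machine toward the Docker bridge subnet (a sketch assuming the default 172.17.0.0/16 bridge; 192.168.1.10 stands in for the Docker host's LAN address):

    # Run on a different host in the same LAN (not on the Docker host):
    sudo ip route add 172.17.0.0/16 via 192.168.1.10
    # The container is now reachable directly, bypassing the 127.0.0.1 DNAT rule:
    curl http://172.17.0.2:3001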

@polarathene

polarathene commented Apr 19, 2023

here is the correct link

Cheers! 👍

network_mode: "host" solves the issue as long as the application is listening on 127.0.0.1.

AFAIK it'd be like running the software outside of the container: whatever it would normally bind to (0.0.0.0 or 127.0.0.1), running it inside a container with --network host / network_mode: host would be equivalent.

Alternatively, you don't publish any ports for the container and just have a reverse proxy route via the container network. If host-mode networking isn't what the user wants, they'd probably want the reverse proxy option, and they should know how to configure that, as it's not specific to this project 👍

@CommanderStorm mentioned this pull request Jul 1, 2023
@louislam
Owner

louislam commented Jul 15, 2023

Closing as I want to keep 3001:3001. Thanks for the PR.
