0.2.0 fails to start: [haproxy.main()] Cannot raise FD limit to 8094, limit is 1024 #134
Comments
At a glance, maybe it makes more sense to change maxconn and maxsock to 1000 instead - it should be more than enough for the docker-socket-proxy use case.
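For reference, the maxconn knob lives in the global section of the image's HAProxy configuration. A sketch only, with the 1000 taken from the suggestion above; maxsock is not set directly, HAProxy derives it from maxconn (plus listeners, checks, etc.), so lowering maxconn lowers the FD requirement:

```
global
    maxconn 1000   # illustrative value; maxsock / the FD budget is derived from this
```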
I prefer adding
We have reverted to the previous version of HAProxy, so this problem shouldn't occur anymore.
For context, the soft limit should stay at its default. Docker from v25, I think, has fixed this with their systemd config files, and will instead inherit the limits from the system (which in most cases on Linux will be the systemd default), and the container will inherit that. You can override this per container too if you need to. Containerd 2.0 needs to be released before its related config is fixed too, and then a follow-up Docker release that upgrades to containerd 2.0. After that point it should be less of a problem to think about 😅

I am a bit curious about your environment as to why you were getting such low limits.
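The "raise it at runtime" behaviour being described is a few lines of code. This is an illustrative Python sketch of the semantics, not what HAProxy actually runs; the 8094 is just the value from the error in the title, and the raise is capped at the hard limit since an unprivileged process cannot go past it:

```python
import resource

# Current NOFILE limits as (soft, hard). On a non-systemd Linux box
# with untouched kernel defaults this is typically (1024, 4096).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Guard for the (unusual) case of an unlimited hard limit, then cap
# the request at the hard limit -- soft may never exceed it.
cap = 8094 if hard == resource.RLIM_INFINITY else min(hard, 8094)

# Any process may move its own soft limit anywhere up to the hard
# limit with no extra privileges. Doing this at startup is the
# "do the right thing" behaviour, instead of demanding a high soft
# limit from the environment.
resource.setrlimit(resource.RLIMIT_NOFILE, (cap, hard))

new_soft, new_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```

So with the kernel-default 1024/4096 pair, a runtime raise can only reach 4096, which is exactly why HAProxy's request for 8094 fails there.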
Nothing fancy (rootless/outdated/etc.). Just no systemd, so 1024/4096 because the kernel defaults weren't modified. I'm using Gentoo Linux with runit as the boot/service manager.
Oh... then just manage limits with your runit config? There's not really anything a project can do when it tries to do the right thing but you forbid it.

There is some software that doesn't play nicely, like Envoy. Last I checked there was no documentation about the expectation of a high limit (over a million); instead of handling it at runtime they expect you to have the soft limit raised before running Envoy. They usually deploy via containers, I think, where they've been able to leverage the bug there.

So HAProxy was recently version bumped from 2.2 (July 2020, and EOL) to 3.0 (May 2024), and that was reverted due to a misconfigured niche environment? 🤔

FWIW, I'm assuming your Docker daemon / containerd will also be running with those limits. That eventually becomes a problem, and was why the limit was initially raised in 2014 for those services. I believe they improved on that, minimizing the issue, but some other issue that was difficult to troubleshoot raised the limits further.

You have nothing to worry about when raising the hard limit to a reasonable amount.
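The one-way nature of the hard limit (anyone may lower it, only a privileged process may raise it back) can be sketched in Python; the values are illustrative and the privilege check just records which side you land on:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard == resource.RLIM_INFINITY:
    # Pick a finite illustrative hard limit first (lowering is allowed).
    hard = 65536
    soft = min(soft, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

# Anyone may lower the hard limit...
lowered = hard - 1
resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, lowered), lowered))

# ...but raising it back requires CAP_SYS_RESOURCE (in practice, root).
try:
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, hard), hard))
    privileged = True   # e.g. running as root
except (ValueError, OSError):
    privileged = False  # "not allowed to raise maximum limit"
```

This is the same reason a forbidding environment leaves a project stuck: once the hard limit is low, an unprivileged daemon cannot undo that at runtime.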
Nope, the earlier comment about the revert lacked context; the revert was for another reason.
Context (it'll be switched back to
When that update lands, if you still have the limits issue, follow the advice above. You can probably configure the limits explicitly too.
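For the per-container override, the usual knob is the ulimits key in Compose (or --ulimit nofile=soft:hard on docker run). A sketch with illustrative values, assuming this project's image name:

```yaml
services:
  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    ulimits:
      nofile:
        soft: 1024
        hard: 8192   # comfortably above the 8094 HAProxy asks for here
```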
It turns out Docker won't allow raising it.
What version of Docker are you running, for reference? I think when I looked into it a year or so ago I was able to lower the hard limit and possibly raise it above the daemon's.

From memory, if the process spawns children they can use as many FDs as their soft limit allows (it doesn't contribute to parents or siblings in a process group, AFAIK), but the hard limit could not be raised above what the parent had, only lowered further. However, I think containers were created without the daemon as the parent, which allowed raising the hard limit beyond the daemon's, but perhaps something has changed or I'm not recalling that correctly 😅
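The inheritance part is easy to check from plain Python (a sketch of the process semantics, nothing Docker-specific): a forked child starts with a copy of the parent's limits, and changes it makes do not flow back up.

```python
import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: reports the limits it inherited, a copy of the parent's.
    csoft, chard = resource.getrlimit(resource.RLIMIT_NOFILE)
    os.write(w, ("%d %d" % (csoft, chard)).encode())
    os._exit(0)

os.close(w)
child_limits = tuple(int(x) for x in os.read(r, 64).split())
os.waitpid(pid, 0)
os.close(r)
```

The open question in the thread is only about the hard-limit ceiling: an unprivileged child cannot raise its hard limit above that inherited copy, which is why where the container process is parented from matters.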
26.1.0
Sure, you can lower it, but you can't raise it.
root (a real one, outside of a container) can raise it above the parent's limit.
It might be because I was using systemd instead of runit. IIRC the containers were being added into a separate systemd slice (might have been a customization on my end at the time) from the one the daemon was operating in.
There is a related issue, haproxy/haproxy#1866, except I'm using amd64.
0.1.2 works:
0.2.0 does not work: