Turning off keepalive does not work as documented #2168
Comments
This happens in version 0.9.0 too.
It should be `keep-alive`.
@oilbeater In my case it's "properly" spelled out, but hey, I'm using an old release.
Huh. Dunno how I managed this (again?). I had it spelled correctly previously, since I did successfully reduce the keepalive timeout, but somehow managed to change the value incorrectly when testing disabling it fully. It does work, but doesn't solve my actual problem (every change that triggers a reload of the ingress controller causes at least a hundred requests to be returned with HTTP status code 000, so similar to but not the same as #489).
@vainu-arto You can try to use services instead of endpoints with the service-upstream annotation (see the sketch below). With it, nginx should not reload the configuration after a deployment, thus not causing errors on already open connections [#257, kubernetes-retired/contrib#1123]. We had a similar issue (involving keep-alive connections and reloads) and it worked like a charm (also, keep an eye on the Dynamic reload implementation, #2152). Using this and the pattern described in the referenced issue #489, your deployments should have near-zero errors. This assumes that what's causing trouble is the controller, not the application itself, of course... ;)
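For illustration, a minimal sketch of an Ingress carrying the service-upstream annotation; the resource names and host are hypothetical, and the `nginx.ingress.kubernetes.io/` annotation prefix is assumed (older controller releases used `ingress.kubernetes.io/`):

```yaml
# Hypothetical Ingress using the service-upstream annotation so nginx
# proxies to the Service's ClusterIP instead of individual pod endpoints;
# endpoint churn from deployments then no longer requires a config reload.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                      # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - host: example.com                        # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service       # hypothetical Service
          servicePort: 80
```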
@Pbtg Thanks a lot! I'll look into those links. My issue happens without touching the application at all (triggering a controller reload by changing any setting is enough), and also isn't cured by disabling keepalive, so the issue seems somewhat different from the usual case.
NGINX Ingress controller version: 0.11.0
Kubernetes version (use kubectl version): v1.8.8
What happened:
The keepalive configuration map directive does not work as documented. The documentation says "Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections." I have
keepalive: "0"
set in the configmap, and the resulting nginx.conf has this:
keepalive_timeout 75s;
The setting of zero is ignored, and the default value is used instead.

What you expected to happen:
I expected that the zero would be passed along to the config, and keepalive would be turned off.
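For reference, since the comments above trace this to the key spelling, here is a minimal ConfigMap sketch using the documented hyphenated key; the ConfigMap name and namespace are assumptions based on common defaults and may differ in a given deployment:

```yaml
# Sketch of the controller ConfigMap with the documented key spelling.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # assumed default name
  namespace: ingress-nginx     # assumed default namespace
data:
  keep-alive: "0"              # documented key is keep-alive, not keepalive
```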