
Turning off keepalive does not work as documented #2168

Closed
ghost opened this issue Mar 5, 2018 · 6 comments

Comments

@ghost

ghost commented Mar 5, 2018

NGINX Ingress controller version:

0.11.0

Kubernetes version (use kubectl version):

v1.8.8

What happened:

The keepalive ConfigMap directive does not work as documented. The documentation says: "Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections." I have keepalive: "0" set in the ConfigMap, but the resulting nginx.conf contains keepalive_timeout 75s;. The zero setting is ignored and the default value is used instead.

What you expected to happen:

I expected that the zero would be passed along to the config, and keepalive would be turned off.
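For reference, a minimal sketch of the ConfigMap in question (the ConfigMap name and namespace below are assumptions and depend on how the controller was deployed; they must match the controller's --configmap flag):

```yaml
# Sketch only: metadata.name and metadata.namespace are assumed values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  keepalive: "0"   # key as used here; the rendered nginx.conf still contains keepalive_timeout 75s;
```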

@ghost
Author

ghost commented Mar 5, 2018

This happens too in version: 0.9.0

@oilbeater
Contributor

It should be keep-alive, not keepalive.
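Assuming that is the documented key, a minimal sketch of the corrected entry (same assumed ConfigMap name and namespace as in the sketch above):

```yaml
# Sketch only: metadata values are assumptions carried over from the sketch above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  keep-alive: "0"   # should render a zero keepalive_timeout in nginx.conf, disabling keep-alive
```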

@ghost
Author

ghost commented Mar 5, 2018

@oilbeater In my case it's "properly" spelled out, but hey, I'm using an old release.

@ghost
Author

ghost commented Mar 5, 2018

Huh. Dunno how I managed this (again?). I had it spelled correctly previously, since I did successfully reduce the keepalive timeout, but somehow managed to change it incorrectly while testing fully disabling it.

It does work, but it doesn't solve my actual problem (every change that triggers a reload of the ingress controller causes at least a hundred requests to be returned with HTTP status code 000, so similar to but not the same as #489).

ghost closed this as completed Mar 5, 2018
@ghost
Author

ghost commented Mar 5, 2018

@vainu-arto You can try to use services instead of endpoints with the service-upstream annotation.

With it, nginx should not reload the configuration after a deployment, and thus should not cause errors on already-open connections [#257, kubernetes-retired/contrib#1123]. We had a similar issue (involving keep-alive connections and reloads) and it worked like a charm (also, keep an eye on the dynamic reload implementation, #2152).
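For illustration, a minimal sketch of an Ingress using that annotation (the host, service name, and port are placeholders; the apiVersion matches Kubernetes 1.8-era Ingress objects):

```yaml
# Sketch only: example-ingress, example.com, example-service and the port are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Proxy to the Service's ClusterIP instead of individual pod endpoints,
    # so endpoint churn from a deployment does not force an nginx reload.
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
```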

Using this and the pattern described in the referenced issue #489, your deployments should see near-zero errors.

This is assuming that what's causing trouble is the controller, not the application itself, of course... ;)

@ghost
Author

ghost commented Mar 5, 2018

@Pbtg Thanks a lot! I'll look into those links.

My issue happens without touching the application at all (triggering a controller reload by changing any setting is enough), and it also isn't cured by disabling keepalive, so it seems somewhat different from the usual case.
