[Feature Request] Configuration options for upstream keepalive requests and timeout #3099
Comments
I am not sure if that could fix what I am experiencing in prod (Nginx behind a Google Cloud LB), but I have a lot of 502 errors (in the "stackdriver logging" console) with the following reason: "backend-connection-closed-before-data-sent-to-client". I read and applied everything mentioned in this article: https://blog.percy.io/tuning-nginx-behind-google-cloud-platform-http-s-load-balancer-305982ddb340 (section …)
I'd be surprised if these directives helped with that case, since they are for upstream keepalive (in your case the upstream is Nginx). But your issue sounds like the same problem, just between GCLB and Nginx. I'd double-check and make sure the timeout in GCLB is less than the configured timeout in Nginx. Does it correlate with ingress-nginx deploys?
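The timeout ordering suggested above can be sketched as an nginx snippet. This is a hedged illustration only: the 650s value and the 600s GCLB idle timeout are assumptions taken from common GCLB tuning advice, not from this thread.

```nginx
# Sketch: keep nginx's client-side keepalive timeout ABOVE the load
# balancer's idle timeout, so nginx never closes a connection that
# GCLB still considers usable (the race behind 502s like
# "backend-connection-closed-before-data-sent-to-client").
http {
    # Assumption: GCLB's backend keepalive timeout is ~600s, so any
    # value comfortably above that works; 650s is illustrative.
    keepalive_timeout 650s;
}
```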
I have a couple of high-load microservices that use gce-ingress and don't experience this issue. Yeah, I'd love this to be a misconfig, but so far nothing pops out; I just wanted to see if anybody else has experienced this issue. Anyway, I now get that this feature request is for upstream servers, not clients. Thanks.
@JordanP for downstream connections you can already do it using https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#keep-alive |
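For reference, the downstream (client-facing) keepalive settings pointed to above are set through the controller ConfigMap. A minimal sketch, assuming the `keep-alive` and `keep-alive-requests` keys from the linked configmap docs; the ConfigMap name, namespace, and values are illustrative and may differ in your cluster:

```yaml
# Sketch: tune keepalive for DOWNSTREAM (client -> nginx) connections.
# Key names are from the ingress-nginx configmap documentation linked
# in the comment above; values here are examples only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration        # assumption: your controller's ConfigMap name
  namespace: ingress-nginx         # assumption: your controller's namespace
data:
  keep-alive: "650"                # client keepalive timeout, in seconds
  keep-alive-requests: "10000"     # requests served per client connection
```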
@ElvinEfendi I can add this via a ConfigMap change. |
@diazjf that would be great! |
Allows upstream keepalive values such as keepalive_timeout and keepalive_requests to be configured via ConfigMap. Fixes kubernetes#3099
Nginx 1.15.3 introduced keepalive_requests and keepalive_timeout for ngx_http_upstream_module. We currently have upstream-keepalive-connections to configure the number of keepalive connections to upstream. We should also support these two new directives.

One of the use cases is when the client keepalive timeout in the backend is less than the time Nginx keeps upstream keepalive connections open. In this scenario the upstream can close the connection, but Nginx would still think it's open and proxy a request through it, which results in a "Connection reset by peer" error. https://theantway.com/2017/11/analyze-connection-reset-error-in-nginx-upstream-with-keep-alive-enabled/ has more info on this.