SSL proxying accidentally stops working #2354
Comments
Do you have the flag … ?

Yes, …

@aledbf also question, do we need … ?
Some clarifications, correct me if I'm wrong. First, I was able to do SSL passthrough without …

One more time, correct me if I'm wrong.
As an outcome: if you're running the ingress controller behind AWS ELB or HAProxy, you should consider enabling the PROXY protocol on the load balancer instead of passing … Enabling …
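For readers unfamiliar with the suggestion above, here is a minimal sketch of what the PROXY protocol (v1) means on the wire: the load balancer prepends a single text line carrying the real client address, which the backend must strip before it touches the TLS bytes. This is hypothetical illustration code, not taken from ingress-nginx or any load balancer.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// readProxyV1Header consumes the PROXY protocol v1 line that a load balancer
// prepends to the connection and returns the original client address it
// reports. Hypothetical helper for illustration only.
func readProxyV1Header(br *bufio.Reader) (string, error) {
	line, err := br.ReadString('\n')
	if err != nil {
		return "", err
	}
	// Expected form: PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>\r\n
	fields := strings.Fields(strings.TrimSpace(line))
	if len(fields) != 6 || fields[0] != "PROXY" {
		return "", fmt.Errorf("not a PROXY v1 header: %q", line)
	}
	return net.JoinHostPort(fields[2], fields[4]), nil
}

func main() {
	// Simulated connection: PROXY header first, then the raw TLS bytes.
	raw := "PROXY TCP4 203.0.113.7 10.0.0.5 54321 443\r\n\x16\x03\x01..."
	addr, err := readProxyV1Header(bufio.NewReader(strings.NewReader(raw)))
	fmt.Println(addr, err) // 203.0.113.7:54321 <nil>
}
```

In the ELB/HAProxy setup described above, NGINX itself does this stripping when the PROXY protocol is enabled in its configuration; the sketch only shows why the backend then still knows the real client address without SSL passthrough.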
Just for future searchers, clarifying that the PROXY protocol conclusion in the comment above is incorrect, as the TCP proxy also does SNI. See linked #2540.
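To make that correction concrete: a raw TCP proxy can still route by hostname because the SNI is sent in clear text inside the TLS ClientHello. Below is a rough, self-contained sketch of the idea (not the actual ingress-nginx TCP proxy code); it reuses crypto/tls to parse a buffered ClientHello and abandons the handshake once the server name is known.

```go
package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"io"
	"net"
	"time"
)

// readOnlyConn feeds already-buffered bytes to crypto/tls and refuses writes,
// so the handshake aborts right after the ClientHello has been parsed.
type readOnlyConn struct{ r io.Reader }

func (c readOnlyConn) Read(p []byte) (int, error)       { return c.r.Read(p) }
func (c readOnlyConn) Write(p []byte) (int, error)      { return 0, io.ErrClosedPipe }
func (c readOnlyConn) Close() error                     { return nil }
func (c readOnlyConn) LocalAddr() net.Addr              { return nil }
func (c readOnlyConn) RemoteAddr() net.Addr             { return nil }
func (c readOnlyConn) SetDeadline(time.Time) error      { return nil }
func (c readOnlyConn) SetReadDeadline(time.Time) error  { return nil }
func (c readOnlyConn) SetWriteDeadline(time.Time) error { return nil }

// peekSNI extracts the server_name extension from buffered ClientHello bytes
// without completing (or terminating) the TLS handshake.
func peekSNI(clientHello []byte) string {
	var sni string
	cfg := &tls.Config{
		GetConfigForClient: func(h *tls.ClientHelloInfo) (*tls.Config, error) {
			sni = h.ServerName
			return nil, nil
		},
	}
	// The handshake always fails here (we never answer the client), but the
	// callback above has already captured the SNI by that point.
	_ = tls.Server(readOnlyConn{bytes.NewReader(clientHello)}, cfg).Handshake()
	return sni
}

func main() {
	fmt.Println(peekSNI(nil) == "") // true: nothing buffered, no SNI
}
```

A real passthrough proxy would then replay the buffered ClientHello bytes to the backend selected for that server name.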
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version: 0.12.0
Kubernetes version (use kubectl version): 1.9.5
Environment:
uname -a: 4.14.19-coreos
What happened:
On two clusters with different nginx-ingress-controller versions (0.12.0 and 0.9.0-beta-11), SSL proxying unexpectedly stopped working, so all connections to port 443 just hang. Inside the pod I see that the TCP receive queue is 32 KB and there are ~26K CLOSE_WAIT TCP connections to port 443.
Restarting the pod helps.
What you expected to happen:
SSL proxying works reliably and can handle intermittent issues (if any). At the very least, the whole ingress controller should fail if the SSL proxy is broken.
How to reproduce it (as minimally and precisely as possible):
To be honest, I have no 100% reliable recipe for reproducing it. In our case this happened on two customer clusters at almost the same time, so I suspect it may be related to some reconfiguration of ingress resources or services (endpoints). The clusters are in two different AWS regions, so I'm fairly sure this is not an environment-specific issue (e.g. network loss).
Anything else we need to know:
I see many connections like the following (CLOSE_WAIT means the ingress controller received a FIN from the client but, for some reason, never closes the socket):
32 KB in the receive queue.
SSL proxy setup happens here and it ultimately calls the Handle function. I suspect something may be going wrong in these functions.
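For anyone digging into this, here is a hedged sketch of what a passthrough handler generally has to do and where CLOSE_WAIT pile-ups come from. It is not the actual Handle implementation; handlePassthrough and the addresses below are made up for illustration. If either copy direction blocks forever, or the client connection is never Close()d after the peer sends FIN, the socket stays in CLOSE_WAIT exactly as observed above.

```go
package main

import (
	"io"
	"net"
	"time"
)

// handlePassthrough proxies raw TLS bytes to a backend. The function and the
// upstream address are hypothetical; routing by SNI is out of scope here.
func handlePassthrough(client net.Conn, upstream string) {
	defer client.Close() // skipping this after a client FIN leaves the socket in CLOSE_WAIT

	backend, err := net.DialTimeout("tcp", upstream, 5*time.Second)
	if err != nil {
		return
	}
	defer backend.Close()

	done := make(chan struct{}, 2)
	pipe := func(dst, src net.Conn) {
		io.Copy(dst, src)
		// Half-close the write side so the opposite direction can finish
		// instead of blocking forever.
		if tc, ok := dst.(*net.TCPConn); ok {
			tc.CloseWrite()
		}
		done <- struct{}{}
	}
	go pipe(backend, client)
	go pipe(client, backend)

	// Wait for both directions; returning runs the deferred Close calls.
	<-done
	<-done
}

func main() {
	// Usage sketch: accept on a test port and hand each connection off.
	ln, err := net.Listen("tcp", ":8443")
	if err != nil {
		return
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		go handlePassthrough(c, "127.0.0.1:9443")
	}
}
```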