HTTP->HTTPS redirect does not work with use-proxy-protocol: "true" #808
@jpnauta please check if the latest beta solves the issue (0.9-beta.10)
@aledbf I have this problem in beta.10 too.
I think it may not be the problem here.
@acoshift in the configmap use
@aledbf ok, right now I removed the custom template and set
@acoshift that's strange, because GCP does not support proxy protocol for HTTP, only HTTPS.
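For context, the setting under discussion lives in the controller's ConfigMap. A minimal sketch of what enabling it looks like (the name and namespace are illustrative and must match whatever the controller's `--configmap` flag points at):

```yaml
# Hypothetical ConfigMap for the nginx ingress controller.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
data:
  # Tells nginx to expect the PROXY protocol header on incoming connections.
  # Only enable this when the load balancer in front actually sends it;
  # otherwise nginx reports "broken header" errors like those in this thread.
  use-proxy-protocol: "true"
```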
Here's all configs.
Some logs in nginx pod
Please change the GCP LB to HTTP. In that mode the load balancer sends the X-Forwarded-For header.
tyvm for helping me, but for my use case I can't use the GCP HTTP LB, because I want the ingress controller to handle TLS (from kube-lego). Right now I have to use a custom template as a workaround. My knowledge is very limited, but I doubt this line
"TLS sni server send source IP on port 442"; maybe set
@aledbf Unfortunately upgrading to
<HEAD><TITLE>Server Hangup</TITLE></HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<FONT FACE="Helvetica,Arial"><B>
@jpnauta I cannot reproduce this error. Not sure where you are running, but this is the full script to provision a cluster in AWS.

Create a cluster using kops in us-west:

export MASTER_ZONES=us-west-2a
export WORKER_ZONES=us-west-2a,us-west-2b
export KOPS_STATE_STORE=s3://k8s-xxxxxx-01
export AWS_DEFAULT_REGION=us-west-2
kops create cluster \
--name uswest2-01.xxxxxxx.io \
--cloud aws \
--master-zones $MASTER_ZONES \
--node-count 2 \
--zones $WORKER_ZONES \
--master-size m3.medium \
--node-size m4.large \
--ssh-public-key ~/.ssh/id_rsa.pub \
--image coreos.com/CoreOS-stable-1409.5.0-hvm \
--yes

Create the echoheaders deployment:

echo "
apiVersion: v1
kind: Service
metadata:
name: echoheaders
labels:
app: echoheaders
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: echoheaders
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: echoheaders
spec:
replicas: 1
template:
metadata:
labels:
app: echoheaders
spec:
containers:
- name: echoheaders
image: gcr.io/google_containers/echoserver:1.4
ports:
- containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: echoheaders-nginx
annotations:
kubernetes.io/ingress.class: nginx
spec:
tls:
- hosts:
- echoheaders.uswest2-01.rocket-science.io
secretName: echoserver-tls
rules:
- host: echoheaders.uswest2-01.xxxxx-xxxx.io
http:
paths:
- backend:
serviceName: echoheaders
servicePort: 80
" | kubectl create -f - Create the nginx ingress controller $ kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/aws/nginx/nginx-ingress-controller.yaml Install kube-lego
Configure kube-lego
Install
Run the tests:

$ curl -v echoheaders.uswest2-01.rocket-science.io
* Rebuilt URL to: echoheaders.uswest2-01.rocket-science.io/
* Trying 52.32.132.20...
* TCP_NODELAY set
* Connected to echoheaders.uswest2-01.rocket-science.io (52.32.132.20) port 80 (#0)
> GET / HTTP/1.1
> Host: echoheaders.uswest2-01.rocket-science.io
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.13.2
< Date: Thu, 06 Jul 2017 01:54:57 GMT
< Content-Type: text/html
< Content-Length: 185
< Connection: keep-alive
< Location: https://echoheaders.uswest2-01.rocket-science.io/
< Strict-Transport-Security: max-age=15724800; includeSubDomains;
<
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.13.2</center>
</body>
</html>
* Curl_http_done: called premature == 0

Delete the cluster:

kops delete cluster --name uswest2-01.xxxxxxxx.io --yes
@jpnauta if you are running in GCE or GKE you cannot enable proxy protocol, because it only works with HTTPS.
Ahhh okay, good to know 👍 I'm on GKE, thanks for your help @aledbf
FYI, if you want to configure a load balancer manually (not for the faint of heart): you can work around this limitation by sharing an external IP between the L7 GLBC Ingress with a custom HTTP backend that redirects all traffic to HTTPS, and a separately created L4 LB with TCP proxy protocol for your HTTPS traffic (to the nginx ingress controller).
There is the same issue on Azure (AKS). Redirection doesn't work.
@aledbf since proxy-protocol doesn't work over HTTP in GKE, is it possible to get the client IP with GCE's TCP load balancer and ssl-passthrough with proxy-protocol disabled?
@anurag not sure. If you want to test this please make sure you use
I've got the same problem. I can't disable proxy protocol, because disabling it would break client IP detection for HTTPS (I need to use ssl-passthrough for some backends). The only way I see now is to use haproxy to proxy traffic for port 80 using proxy protocol. Maybe we can add two different config annotations to enable proxy-protocol listening for ports 80 and 443 separately.
@aledbf ^
When running nginx ingress on GKE (with a TCP load balancer), the only way to get the real client IP is to turn on proxy protocol. However, that stops the HTTP->HTTPS redirect: HTTP requests end up with an empty response on the client side and a broken header error on the nginx side. Confirmed the issue still exists with the latest release, 0.17.1. My solution is:
Voila.
@coolersport Is that with the regional or global TCP LB?
It is regional in my case. However, this solution addresses it at the nginx layer; it has nothing to do with the GCP LB. In fact, the LB is auto-provisioned by GKE.
That's weird, I get real IPs from my regional GCP TCP LB in both X-Forwarded-For and X-Real-IP, without use-proxy-protocol. You just have to guard it against spoofing with proxy-real-ip-cidr. The global TCP LB doesn't pass through IPs in X-Forwarded-For, though.
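A sketch of the ConfigMap shape this describes (names and values are illustrative; the CIDRs here are commonly cited GCP load balancer source ranges, but verify them against current GCP documentation before relying on them):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
data:
  # Proxy protocol off: the regional TCP LB passes real IPs in the headers.
  use-proxy-protocol: "false"
  # Trust X-Forwarded-For / X-Real-IP only from these source ranges, so
  # arbitrary clients cannot spoof the headers.
  proxy-real-ip-cidr: "130.211.0.0/22,35.191.0.0/16"
```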
I am having this same problem with 0.19. @coolersport I have tried your approach, but I believe it relies on the GCP TCP Proxy, which is only available for global, not regional, static IPs and forwarding rules.

Here is an overview of our setup: we have 900+ static IPs for our network, each with a manually created regional forwarding rule (80-443) targeting all required instance groups. We have 10 nginx-ingress controllers, each with 100+ static IPs configured via externalIPs on the service. (This was Google-designed and suggested due to a hardcoded limitation of 50 live health checks per cluster.) We use cert-manager (the updated version of kube-lego) to automatically provision certs with ingress annotations.

Everything in this scenario works aside from getting the client's actual IP into our app. If we enable "use-proxy-protocol" in our ConfigMap, we immediately start getting "broken header:" error messages; I've tried every combination of proxy-real-ip-cidr possible with no results. We cannot re-provision all 900+ static IPs as global due to multiple issues, including quota and the fallout of propagation across all of the domains. Looking for any help we can get at all.
@artushin what version of ingress-nginx are you using where you don't need proxy protocol?
@Spittal I'm on 0.13.0, but again, that works only on regional TCP LBs. I don't know if it works with proxy protocol on a global LB, but it definitely doesn't without it.
Hi all, even if I turn on the
in the controller, I still receive errors and the server does not respond (HTTPS call).
The result for an HTTP call has the same error, but you see the content of the request + headers. Any idea why I can't make it work even with the flag set? PS: I'm using the helm version
I just noticed that it works in a cluster which uses secure-backend. When setting up a new cluster with no SSL passthrough, the broken header issue reappears.
@coolersport I know this is from a while ago, but I am having this exact issue. I am quite new to Kubernetes, and I wonder if you could clarify how you set up the sidecar? This has been driving me crazy for 2 weeks now!
@roboticsound, here they are. Sorry, I can't post full YAML files. Hope this gives you the idea.
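The actual snippets did not survive in this thread. Purely as an illustration (this is not @coolersport's exact config; all names and ports are hypothetical), a sidecar of this kind is typically a small nginx container in the ingress controller pod that listens on a plain-HTTP port, with the LB's port 80 pointed at it instead of at the proxy-protocol listener:

```yaml
# Hypothetical sidecar entry added to the controller pod's containers list.
# It listens on 8081 (plain HTTP, no proxy protocol) and 301-redirects
# everything to HTTPS, sidestepping the broken-header problem on port 80.
- name: http-redirect
  image: nginx:1.15-alpine
  ports:
    - containerPort: 8081
  volumeMounts:
    - name: redirect-conf
      mountPath: /etc/nginx/conf.d
# ...where the ConfigMap-mounted conf.d/default.conf would be along the
# lines of:
#   server {
#     listen 8081 default_server;
#     return 301 https://$host$request_uri;
#   }
```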
@coolersport Thanks! Helped a lot.
In case somebody didn't see the better solution: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
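The approach in that link preserves the client source IP at the Service level instead of via proxy protocol. A minimal sketch of the controller's Service under that approach (names and selectors are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  type: LoadBalancer
  # Routes external traffic only to node-local endpoints and preserves
  # the client source IP, at the cost of potentially uneven load spreading.
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```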
@dano0b maybe I'm missing something, but I configured In my opinion, the best solution right now is the one that @coolersport provided. UPDATED: After disabling
Do we have a standard way of doing this?
This is a real headache. I've followed https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip and I've modified our ConfigMap (apiVersion: v1). Still nothing, only showing the local IP.
For those using
without
Interestingly it seems to work with
It helps, thanks!
I am currently using gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.7. I was having issues as in #277, but that issue is marked as resolved. My ingress would work properly with https://, but would return an empty response with http://. This is what happened when I tried to cURL my domain:

When I changed the use-proxy-protocol configuration from true to false, the curl worked correctly.

Here is my original config map to reproduce the situation: