Ingress Nginx / GLBC conflict #1657
Which version of Kubernetes are you running? The annotation will only work post 1.3.
Kubernetes v1.3.5 on GKE. I've just tried to set the cluster size to 0, then to 2, in order to recreate everything. About the README:
GLBC automatically adds a forwarding rule and the Nginx RC doesn't. Is that right?
I don't understand why you need the forwarding rule; nginx is a pod, it doesn't even understand cloud providers. Are you trying to run the nginx controller in isolation, or GLBC -> nginx -> origin server? Maybe you meant a firewall rule? Nginx does need a firewall rule, just like any other pod or process running on a VM in your Kubernetes cluster.
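For reference, on GKE that firewall rule can be created by hand with gcloud. This is only a sketch: the rule name, node tag, and source range below are placeholders, not values from this thread.

```shell
# Illustrative only: rule name and target tag are placeholders.
# Opens the nginx controller's hostPorts (80/443) on the cluster's nodes.
gcloud compute firewall-rules create allow-nginx-ingress \
    --allow tcp:80,tcp:443 \
    --source-ranges 0.0.0.0/0 \
    --target-tags <your-gke-node-tag>
```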
It should be in the READMEs of the relevant ingress controllers. We could also add it to some top-level doc, but there are so many ingress controllers out there that don't implement it.
It would be great if we made nginx smart about the cloud provider it's running on and autocreated the firewall rule. The challenge there is that: 1. there are so many cloud providers, and 2. you don't always want the firewall rule, especially if you're going GLBC -> nginx (which we would solve via a boolean annotation). We should surface the need for the firewall rule in the nginx controller docs if we don't already; I thought we did somewhere. We have an e2e test that uses the annotation and it passes continuously, maybe you can diff your setup with that one? It uses this rc: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/testing-manifests/ingress/nginx/rc.yaml, and only augments it with the annotation like: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/ingress_utils.go#L569
Sure, I meant firewall rule. I tried to run the basic use case described in the nginx RC README, without GLBC. Adding something about the annotation to the nginx RC README would keep people from using both GLBC and the nginx RC (just what happened to me). Wouldn't it?
I think so
I deleted the ingress + nginx RC and recreated them with the annotation. Everything's OK now.
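For anyone landing here, a minimal sketch of the annotation being discussed follows. The resource and service names are placeholders, and the apiVersion matches the 1.3-era extensions API.

```yaml
# Sketch only: names are illustrative, not from this thread.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Claims this Ingress for the nginx controller, so GLBC ignores it.
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: example-service
    servicePort: 80
```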
Please open a PR if you have time, the suggested clarifications make sense. The easiest way to detect that the pod is running on GCP is by performing a
We could also teach the nginx controller to autodetect the cloud provider itself, and create just the firewall rule. This feels more fickle and less useful IMO.
I created #1672 for clarifications.
Actually the point is to leverage the cloud-provider detection logic already in the master to do the right thing cross-platform, but I agree with the additional cost. I think the tradeoff here is: document how to do the easy, cheap thing by hand (create the firewall rule) and make the more useful cross-platform abstraction automatic (service.type=lb). The second is going to be useful in production deployments anyway.
Do you mean that the service.type LB is a more reliable and/or efficient way? It could also solve an associated problem I'm currently facing. I defined multiple environments through namespaces and used the
What can I do? Can the creation of a service.type LB resolve this issue?
No, I was talking about using both a service.type LB AND an ingress controller in a pipeline.
You can only run one of them per node, because there's only one port 80/443. You can run multiple if you don't mind that the controller listens on some other port (you need to create another RC with hostPort set to something like 8080). You can run any number of service.type=lb Services because those are provisioned by a cloud provider.
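The "another RC with a different hostPort" idea above might look like the fragment below. This is a hedged sketch: the names, labels, and image tag are assumptions, not taken from this thread.

```yaml
# Sketch: a second nginx controller RC bound to an alternate hostPort,
# since 80/443 on each node are already taken by the first controller.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-alt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-alt
    spec:
      containers:
      - name: nginx-ingress
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3  # illustrative tag
        ports:
        - containerPort: 8080
          hostPort: 8080  # alternate port so it can coexist on the same nodes
```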
OK, but I'm not sure about the idea: why would it be useful in production deployments?
Because you get a lot for free just by publishing a public IP from a cloud provider (basic DDoS protection, regional load balancing because they have global POPs).
At the same time the cloud-provider LB is less flexible (no redirects, adding a new svc takes 10m).
I see, good things indeed. With that configuration, hostPort for the nginx RC would not be necessary anymore (service LB/NodePort used)? I also noticed the replicas count is 1 in the nginx RC config. For high availability, would it be useful to increase this value (to avoid a single point of failure)?
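The service.type=LoadBalancer approach being discussed could be sketched like this; the Service name and selector labels are placeholders for whatever your controller pods actually use.

```yaml
# Sketch: fronting the nginx controller with a cloud LB instead of hostPort.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress  # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

With something like this in place, the controller pods no longer need hostPort, and the RC's replicas can be raised above 1 without the pods fighting over node ports.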
@pdoreau I'm also trying to get the example to work on GKE and just wanted to clarify how you exposed the ingress controller and opened up the firewall. I exposed it as a
Was that how you went about exposing the ingress controller to get everything to work, or is there anything else I need to open up in the firewall? I've also tried opening
Nevermind, I got it to work by exposing
Both hostPort 80 and NodePort (as long as you actually create a NodePort Service) should work. 30m sounds like too long; when the firewall-rules create call completes, it should be open. I assume you're just running the raw nginx controller vs. the nginx controller behind the GCE ingress controller; in the latter case you will have a delay of ~15m until health checks pass.
Issues go stale after 30d of inactivity.
Prevent issues from auto-closing with an
If this issue is safe to close now please do so with
Send feedback to sig-testing, kubernetes/test-infra and/or
Stale issues rot after 30d of inactivity.
If this issue is safe to close now please do so with
Send feedback to sig-testing, kubernetes/test-infra and/or
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Hello. I'm trying the basic HTTP example for configuring an Nginx Ingress Controller.
It looks like both the Nginx RC and GLBC are enabled. Here is my describe ingress command output:
I've added the annotation to disable GLBC with "nginx".
I got a 502 response with the first IP (GLBC, I guess) and no response from the second.
Is there something else to do to disable GLBC / enable Nginx?