
Option to share LB between Ingresses #369

Closed
GGotardo opened this issue Jun 26, 2018 · 76 comments
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@GGotardo

GGotardo commented Jun 26, 2018

I want to organize my cluster into multiple namespaces (app1, app2) and work with Ingress to access each of them. Something like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ing
  namespace: app1
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ip-ingress-backend
spec:
  rules:
  - host: app1-service1.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-1
          servicePort: 80
        path: /service1
  - host: app1-service2.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-2
          servicePort: 80
        path: /service2
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app2-ing
  namespace: app2
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ip-ingress-backend
spec:
  rules:
  - host: app2-service1.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-1
          servicePort: 80
        path: /service1
  - host: app2-service2.example.com
    http:
      paths:
      - backend:
          serviceName: nginx-2
          servicePort: 80
        path: /service2

But when I try to do so, the following error is shown while creating the second Ingress:

googleapi: Error 400: Invalid value for field 'resource.IPAddress': 'xxxx'. Specified IP address is in-use and would result in a conflict., invalid

It tries to create another LB, but it should share the same one, just creating new backends/frontends.

@rramkumar1
Contributor

@GGotardo What's preventing you from just giving the ingress in "app-2" namespace a different static-ip?

The ingress-gce controller has no easy way of knowing that you want both of those Ingresses to have the same static IP. Even if it did, the fact that Ingresses are namespaced means that the controller must respect this separation.

@GGotardo
Author

@GGotardo What's preventing you from just giving the ingress in "app-2" namespace a different static-ip?

Actually that's not a problem for me, and it's how I'm doing it on GCP today, because I have a small development cluster. But I could have a big cluster with many namespaces, and then I'd need one LB and one static IP for each of them.

Thinking about the GCP Load Balancer, this could be solved with a single LB and multiple backends/frontends.

Is ingress-gce responsible for creating Load Balancers once I create an Ingress resource?

@rramkumar1
Contributor

rramkumar1 commented Jun 27, 2018

Yes, ingress-gce is responsible for creating the LB resources given an Ingress specification.

I see your point, but as I said before, this is a very ugly problem to solve in the ingress-gce controller. My suggestion would be to either condense the number of namespaces you need or have enough static IPs available.

Regardless, this is an interesting use case so I'll leave this open as a feature suggestion.

@rramkumar1
Contributor

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 27, 2018
@ashi009

ashi009 commented Jul 12, 2018

It's not a rare use case in a large shared k8s cluster (L1/L2 GFE are doing the same).

For our case, different teams may use their own namespaces to manage their deployments and services. It shouldn't be their problem to manage things like public DNS setup, TLS termination, cert renewal, etc.

It's also worth mentioning that this is already supported by many other ingress controllers, e.g. traefik and nginx. That said, I don't like the idea of putting an L2 SLB behind the GCLB.

A workaround I can think of would be adding a custom resource type, say IngressFragment, and creating a controller that joins the fragments into a single Ingress resource in a dedicated namespace for gce-ingress-controller to consume.
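A rough sketch of what such a fragment could look like — the IngressFragment kind, its API group, and the target field are all hypothetical, not an existing API:

```yaml
# Hypothetical custom resource: each team declares its routes in its own namespace.
apiVersion: example.com/v1alpha1
kind: IngressFragment
metadata:
  name: app1-routes
  namespace: app1
spec:
  # The aggregated Ingress this fragment should be merged into (hypothetical field).
  target: shared-ingress
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /service1
        backend:
          serviceName: nginx-1
          servicePort: 80
```

The joining controller would watch all fragments, concatenate their rules, and maintain a single Ingress in a dedicated namespace for gce-ingress-controller to consume.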

@toredash

This is a feature we also want. Currently we use a mix of the nginx controller and the gce controller: high-volume services get their own GCE LB, while normal services use a shared nginx LB.

@ssboisen

Another vote for this feature. Another use case is highly dynamic test environments, where a new deployment (with Ingress) is created per pull request. It would be very nice if the Kubernetes ingress controller for GKE worked with multiple Ingress resources on the same IP. That way we could define a wildcard DNS entry pointing to the load balancer, which would then have path/hostname mappings based on the individual Ingress documents. This is what we do today with the nginx ingress.

@JorritSalverda

JorritSalverda commented Nov 1, 2018

Recombining multiple Ingresses into one load balancer is something the nginx ingress already does, and it would be extremely useful to have for the GCE ingress as well, since it allows applications to set themselves up as the backend for a particular route while keeping their manifests otherwise independent of other applications. It would look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: https
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: https

I assume implementing this raises questions about a lot of edge cases and about how to stay within the limits of the URL map, but in general it's something like "if they share hostnames, combine them into one load balancer."

@rramkumar1
Contributor

If someone wants to tackle this, we will happily accept an implementation. Keep in mind though that this is a messy problem to solve in the code.

/good-first-issue
/help-wanted

@k8s-ci-robot
Contributor

@rramkumar1:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

If someone wants to tackle this, we will happily accept an implementation. Keep in mind though that this is a messy problem to solve in the code.

/good-first-issue
/help-wanted

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Nov 1, 2018
@agadelshin
Contributor

I'd like to dive into this issue.

@rramkumar1
Contributor

@pondohva Great! Looking forward to the PR.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 19, 2019
@rramkumar1 rramkumar1 added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 12, 2019
@dreh23

dreh23 commented Apr 8, 2019

We are creating a namespace per dev branch and exposing it to the net. We would like to keep the simplicity of a GCE ingress, but we will run into a quota (money) issue on GCP. A shared ingress would save us from having to use another ingress controller.

@thiagofernandocosta

Hi, buddies.
Does anyone have an idea about this issue?
I've been thinking of working around this using Helm templates and updating my Ingress resource, as mentioned by @JorritSalverda, but I'm not sure about that.

If someone else has any idea or approach I will appreciate that.
Thanks.

@thiagofernandocosta

thiagofernandocosta commented Jul 6, 2019

While I'm at it, lol: does anyone know if this is a good approach?
For each deployment I've configured a ManagedCertificate and a static IP and associated them with my Ingress. I appreciate any help.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wp-{{ .Values.app }}
  annotations:
    kubernetes.io/ingress.global-static-ip-name: wp-{{ .Values.app }}-external-ip
    networking.gke.io/managed-certificates: wp-{{ .Values.app }}-certificate
spec:
  rules:
  - host: {{ .Values.domain }}
    http:
      paths:
      - path: /*
        backend:
          serviceName: wp-{{ .Values.app }}
          servicePort: 80

@aeneasr

aeneasr commented Jul 11, 2019

Recombining multiple Ingresses into one load balancer is something the nginx ingress already does, and it would be extremely useful to have for the GCE ingress as well, since it allows applications to set themselves up as the backend for a particular route while keeping their manifests otherwise independent of other applications. It would look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: https
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
spec:
  tls:
  - hosts:
    - www.mydomain.com
    secretName: tls-secret
  rules:
  - host: www.mydomain.com
    http:
      paths:
      - path: /api/*
        backend:
          serviceName: api
          servicePort: https

I assume implementing this raises questions about a lot of edge cases and about how to stay within the limits of the URL map, but in general it's something like "if they share hostnames, combine them into one load balancer."

This is what we're currently dealing with as we're using multiple Helm Charts that each have their own ingress definitions, but we want to combine them under one domain, separated by path. Is there already a workaround here?

edit:// We don't need namespace separation

@retpolanne
Contributor

I +1 this issue. We work in a namespace-per-app environment, and recently we hit the limit of 1000 forwarding rules. The solution would be to aggregate our namespaces (which would be hard for us, given the number of workloads), create another cluster in another project, or use ingress-nginx (which means we would lose the benefits of the managed L7 LB).

@blurpy

blurpy commented Apr 1, 2020

We currently have on-prem clusters and are considering a move to GKE. Using nginx-ingress, we have wildcard DNS for our domains, which lets developers choose a subdomain or context path in their Ingress without any other configuration involved. Not being able to reuse an IP address across Ingresses seems to increase complexity quite a lot. Hoping for a solution to this.

@victortrac

We currently have on-prem clusters and are considering a move to GKE. Using nginx-ingress, we have wildcard DNS for our domains, which lets developers choose a subdomain or context path in their Ingress without any other configuration involved. Not being able to reuse an IP address across Ingresses seems to increase complexity quite a lot. Hoping for a solution to this.

@blurpy There's nothing preventing you from using nginx-ingress on GKE to do this today. The nginx-ingress controller will allocate a single GLB with a single public IP address. Set your DNS as a wildcard to this IP address. Your developers can then create as many Ingress resources as they want, all sharing this IP address.
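For illustration, a per-team Ingress picked up by the in-cluster nginx controller could look like this (host, namespace, and service names are placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: team-a
  namespace: team-a
  annotations:
    # Claimed by the nginx controller rather than ingress-gce.
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: team-a.example.com  # covered by the wildcard DNS record
    http:
      paths:
      - path: /
        backend:
          serviceName: team-a-svc
          servicePort: 80
```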

@blurpy

blurpy commented Apr 2, 2020

@blurpy There's nothing preventing you from using nginx-ingress on GKE to do this today. The nginx-ingress controller will allocate a single GLB with a single public IP address. Set your DNS as a wildcard to this IP address. Your developers can then create as many Ingress resources as they want, all sharing this IP address.

Thanks, good to know. I was hoping to use the managed part of GKE as much as possible though, so I'm still hoping for an improvement here. nginx-ingress is a nightmare for both ops and devs because they don't care about backwards compatibility.

@dfernandezm

I struggle to understand why GCE ingress is still not at parity with other ingress controllers as of 2020. This is a highly desired feature in our workflow, since including an Ingress as part of a Helm chart gives a lot of flexibility. Looking forward to the combined power of GLB and incremental ingresses.

@Berndinox

Oh, sorry... it seems I didn't get the whole thread, lol.
At least I have the basics and can play around a bit. I'll let you know if I find any updates regarding your goals.

@jglick

jglick commented Dec 10, 2020

create a controller to join the fragments into a single ingress resource

https://github.com/jakubkulhan/ingress-merge

My use case is a slight variant: several Ingress objects (perhaps in the same namespace) with the same host but binding different Services to different paths. ingress-nginx supports this.

@Andrewangeta

create a controller to join the fragments into a single ingress resource

https://github.com/jakubkulhan/ingress-merge

My use case is a slight variant: several Ingress objects (perhaps in the same namespace) with the same host but binding different Services to different paths. ingress-nginx supports this.

Any steps on installing without Helm? I'm just using GitHub Actions to do a kubectl apply at the moment, but everyone is using Helm in a lot of scenarios. I just don't need/want it right now.

@Andrewangeta

Also the fact that AWS implemented this already.... kubernetes-sigs/aws-load-balancer-controller#298 (comment)

@tpokki

tpokki commented Dec 11, 2020

Hey @tpokki do you know where I could find some tutorial/examples in order to implement that ?

This workaround/setup with traefik works for us (nginx setup has issues, as described above).

Install traefik:

helm install -f traefik-values.yaml traefik stable/traefik
# traefik-values.yaml
fullnameOverride: traefik
rbac:
  enabled: true
serviceType: NodePort
service:
  annotations:
    cloud.google.com/backend-config: '{"default": "backendconfig"}'

The rest of the objects are created without Helm. N.b. you need to allocate the IP address external-lb-address separately.

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backendconfig
spec:
  timeoutSec: 300
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: external-lb-address
    networking.gke.io/managed-certificates: managedcertificate
spec:
  backend:
    serviceName: traefik
    servicePort: 80
---
apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
  name: managedcertificate
spec:
  domains:
    - example.com

The individual Ingress resources are annotated with

kubernetes.io/ingress.class: traefik

... to make traefik process them.
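For example, a per-namespace Ingress handled by traefik behind the shared GCLB might look like this (namespace, host, and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1
  namespace: app1
  annotations:
    # Processed by traefik; ingress-gce ignores it.
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-svc
          servicePort: 80
```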

@Aut0R3V

Aut0R3V commented Jan 7, 2021

Looks like this issue has been up for a long time. Anything I can do here?

@hermanbanken

hermanbanken commented Jan 7, 2021

On a slightly related note: we are now working on our own multi-cluster LB controller. It takes two or more existing LBs created by different Kubernetes clusters, merges their URLMaps, and creates BackendServices with global backends (instance groups from all clusters).

For us this is the easiest and cheapest way (no Anthos Hub, as we don't need most of Anthos's features).

This controller would be similar to one that can create one LB from multiple Ingress resources, though simpler.

@tnaduc

tnaduc commented Jan 27, 2021

+1 for this feature.
We have multiple deployments residing in different repos, and it would be great to have one Ingress resource per deployment rather than a single long Ingress resource for all deployments. With this feature we could also use Google's managed certificates for our LB. Please add this feature.
Many thanks.

@sergeyshevch
Member

Are there any updates on it?

@Berndinox

Seems like it's coming with the Gateway API, a replacement for Ingress: #973 (comment)

@bryanlarsen

I didn't get this to work with traefik, since it 404s on '/' for me. It has a '/ping' route that returns 200, but that's on a different port, and I couldn't finagle the BackendConfig to get the health check to work.

But I did get nginx to work. It has a default-backend option that returns 200 on '/healthz', which works for the health check.

helm3 install -f nginx-values.yaml ingress-nginx ingress-nginx/ingress-nginx
#nginx-values.yaml

controller:
  service:
    type: NodePort
    annotations:
      cloud.google.com/backend-config: '{"default": "backendconfig"}'
  admissionWebhooks:
    enabled: false
defaultBackend:
  enabled: true

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: backendconfig
spec:
  timeoutSec: 300
  healthCheck:
    requestPath: /healthz
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    networking.gke.io/managed-certificates: foo,bar
spec:
  backend:
    serviceName: ingress-nginx-controller
    servicePort: 80

And then you can create a ManagedCertificate and an Ingress for foo and bar, and they'll share the same LoadBalancer.

But in the end it doesn't really solve the underlying problem. Sure, it lets us create separate Ingresses for foo and bar, but we have to list the certificate names in the annotation on the load-balancer Ingress, so it's not significantly more convenient than putting all the Ingresses into a single manifest.
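For completeness, the per-app pieces mentioned above could look something like this — domain, service, and certificate names are placeholders:

```yaml
apiVersion: networking.gke.io/v1beta2
kind: ManagedCertificate
metadata:
  name: foo
spec:
  domains:
    - foo.example.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: foo
  annotations:
    # Served by the shared nginx controller behind the single GCLB.
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo
          servicePort: 80
```

The catch remains that foo still has to be listed in the managed-certificates annotation on the load-balancer Ingress.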

@adrian-gierakowski

@bryanlarsen you can use the GCLB in L4 proxy mode and terminate TLS on the nginx ingress, and instead of using Google-managed certificates have them automatically created by cert-manager. This would allow you to have services that are fully specified independently of one another, and avoid having to modify a shared resource when adding or removing services exposed to the outside world. With nginx ingress you also get more flexibility, as you can route based on host name etc., whereas GCLB can only route based on path.
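Sketched out, assuming a cert-manager ClusterIssuer named letsencrypt already exists (issuer, host, secret, and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app
  annotations:
    kubernetes.io/ingress.class: nginx
    # cert-manager watches this annotation and provisions the certificate.
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls  # created and renewed by cert-manager
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app
          servicePort: 80
```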

@bryanlarsen

@adrian-gierakowski this is for a situation where cert-manager cannot be used due to cert-manager/cert-manager#3717


@slayer

slayer commented May 11, 2021

I want to use IAP for multiple Ingresses (multiple namespaces) on a shared LB.
That's not possible right now, is it?

@bowei
Member

bowei commented May 11, 2021

I would take a look at https://cloud.google.com/blog/products/containers-kubernetes/new-gke-gateway-controller-implements-kubernetes-gateway-api for a multi-role use case

@Andrewangeta

I didn't realize they had a pre-release version available! I've been waiting for years. @bowei, thanks for sharing!

@rdxmb

rdxmb commented Jul 1, 2021

I cannot believe GCE is the only one that does not support this yet...

@rymnc

rymnc commented Jul 20, 2021

any updates here?

@CarpathianUA

It's strange that at the end of 2021 there is still no way to share an LB between Ingresses...

@Andrewangeta

@CarpathianUA I was just about to comment the same thing yesterday. Supposedly the new Gateway API will help with that, but it's not available with GKE Autopilot.

@swetharepakula
Member

This is a limitation of the Ingress API. The Gateway API allows for this feature and we will look to support this feature there.

@adrian-gierakowski

@swetharepakula how is this completed? Also what is the limitation of the ingress api? Nginx ingress supports multiple ingress objects controlling a single gateway

@mgoodness

@swetharepakula how is this completed? Also what is the limitation of the ingress api? Nginx ingress supports multiple ingress objects controlling a single gateway

I think the more accurate explanation is that it's a limitation of the GCE implementation of the Ingress API.

@swetharepakula
Member

The Ingress API does not natively support this: a single Ingress is meant to represent a single load balancer. Nginx is a completely different architecture, and what works there does not necessarily work for ingress-gce. In ingress-gce we have many other concerns, such as namespaces and permissions, that cannot be expressed clearly in the Ingress API. This feature is one of the core reasons the Gateway API was created: the Gateway API allows a single LB to be shared and has the necessary permissions and namespace pieces built into it. This feature will be supported through the Gateway API, but we will not be adding it to Ingress.

@adrian-gierakowski , I was not aware Github had different close options. I will close this as not planned.

@swetharepakula swetharepakula closed this as not planned Jun 14, 2022
@adrian-gierakowski

@swetharepakula thank you for the clarification 👍
