
Support multiple addresses (including IPv6) #87

Closed
nikhiljindal opened this issue Dec 21, 2017 · 40 comments
Assignees
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@nikhiljindal
Contributor

nikhiljindal commented Dec 21, 2017

The Ingress spec supports specifying an IP address with the kubernetes.io/ingress.global-static-ip-name annotation, but the ingress-gce controller assumes that it is an IPv4 address.

GCLB supports specifying both IPv4 and IPv6 addresses, as per: https://cloud.google.com/compute/docs/load-balancing/http/cross-region-example.

Are there plans to support IPv6?
I tried to find an existing issue but didn't, hence I am filing this one. Feel free to close as a duplicate if one already exists.

cc @bowei @nicksardo @csbell

@nikhiljindal
Contributor Author

One potential way to support it could be to add another annotation, kubernetes.io/ingress.global-static-ipv6-name, which the controller would then handle appropriately.

Anything else I am missing?
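A sketch of what that might look like on an Ingress (the ipv6 annotation name below is only this proposal, not something the controller recognizes):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Existing annotation, assumed IPv4 today:
    kubernetes.io/ingress.global-static-ip-name: "my-ipv4-address"
    # Proposed (hypothetical) companion annotation for a named IPv6 address:
    kubernetes.io/ingress.global-static-ipv6-name: "my-ipv6-address"
spec:
  backend:
    serviceName: my-service
    servicePort: 80
```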

@nicksardo
Contributor

I've looked at the code and don't see any assumptions on IPv4, but I agree it would be nice to support both simultaneously. Instead of making it a separate annotation, what if we make that annotation CSV-capable? Any reason why we shouldn't support N addresses per Ingress? Only allowing one IPv4 and one IPv6 seems unnecessarily restrictive if the GCLB can handle more...
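For illustration, the CSV-capable form might look like this (hypothetical; the controller does not parse a comma-separated value today):

```yaml
metadata:
  annotations:
    # Hypothetical: N named addresses, e.g. one IPv4 and one IPv6.
    kubernetes.io/ingress.global-static-ip-name: "my-ipv4-address,my-ipv6-address"
```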

@nikhiljindal
Contributor Author

Yes, good point. I will try playing with multiple IP addresses on GCLB and see how it goes.

FWIW, this was in response to user feedback on kubemci where users want to specify both an ipv4 and an ipv6 address.

@aaron-trout

Are there any issues preventing this field from being migrated from an annotation to a proper field in the IngressSpec? Attaching one or more static IP addresses to a load balancer seems like it would be useful across providers, not just Google Cloud.

One case that springs to mind now that I am writing this is non-provider ingress controllers like nginx or traefik...

@thockin
Member

thockin commented Mar 8, 2018

@aaron-trout The issue is that there's no universal way to describe that (is it a literal IP value, or an IP named in some control plane?), nor is it implementable on all (or even most) ingresses.

@nicksardo nicksardo changed the title IPv6 support with ingress-gce controller Support multiple addresses (including IPv6) May 4, 2018
@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed enhancement labels Jun 5, 2018
@bmhatfield

Hey there - thanks for maintaining ingress-gce :-)

I just wanted to chime in and say that I've also hit this issue but have been unable to determine a workaround. Perhaps a second Ingress using an IPv6 address is the appropriate approach?

@mofirouz

I'd really like to have this feature where an Ingress/LB setup via Kube will have both an IPv4 and IPv6 address. Most apps/games that are developed for iOS require IPv6 for App Store submission. This is a hard requirement from Apple.

Can we use this as a valid request to push for supporting IPv6 LB frontend support via Kube?

@sijnc

sijnc commented Aug 27, 2018

We use loadBalancerSourceRanges to restrict access to staging environments. We're starting to see residential ISPs issuing IPv6 addresses, and we are unable to grant those clients staging access because of this issue. I suspect we'll see even more IPv6 in the future, making the problem worse. We really need this ASAP, so another +1 for getting this working in GKE. I'm OK with using an annotation until a proper fix is found. @thockin

@thockin
Member

thockin commented Sep 18, 2018 via email

@rramkumar1
Contributor

/good-first-issue
/help-wanted

@k8s-ci-robot
Contributor

@rramkumar1:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/good-first-issue
/help-wanted

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Oct 31, 2018
@adrianlop

Any news here? We need simultaneous IPv4 and IPv6 on the same Ingress/GCP LB too.
We will try creating a second Ingress for now.

@agadelshin
Contributor

I'd like to take this issue.

@sammy

sammy commented Dec 18, 2018

+1 this would be very helpful.

For the time being, we manually assign a second IP to the load balancer via the GCP console.

@abevoelker

This would be very nice to have. As it stands, cert-manager, a popular TLS certificate solution on GKE due to the lack of managed certs, runs into difficulties when trying to serve IPv4 + IPv6 for the same host on two separate Ingresses.

༼ つ ◕_◕ ༽つ @pondohva take my energy ༼ つ ◕_◕ ༽つ

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 18, 2019
@bowei
Member

bowei commented Mar 18, 2019

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 18, 2019
@purificant

+1 this would be very helpful

@Arachnid

IPv6 / multiple IP support would be really useful, and long overdue.

@nicholasklem

Optimistically tried

```yaml
kubernetes.io/ingress.global-static-ip-name: "api-customer-ipv4,api-customer-ipv6"
```

No luck yet.

@bowei
Member

bowei commented Oct 22, 2019

We are tracking this in the backlog. At first blush it looks straightforward to support...

@bowei
Member

bowei commented Dec 23, 2019

/assign

@nikars

nikars commented Apr 17, 2020

Hi! Any progress / ETA for this?

@Darkspirit

Darkspirit commented May 28, 2020

This bug is the root cause why Mozilla doesn't support IPv6 for most services.

@WesleyVestjens

Although it's not really obvious, it's actually possible to make the same resource available over both IPv4 and IPv6. We've accomplished this by creating two Ingresses pointing at the same resource:

```yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-project-name-ingress-ipv4
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "ipv4-static-address"
    networking.gke.io/managed-certificates: my-certificate,my-other-certificate
spec:
  backend:
    serviceName: my-service-name
    servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-project-name-ingress-ipv6
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "ipv6-static-address"
    networking.gke.io/managed-certificates: my-certificate,my-other-certificate
spec:
  backend:
    serviceName: my-service-name
    servicePort: 80
```

@nappy

nappy commented Jul 16, 2020

> make the same resource available through dual-stack IPv4 and IPv6

Since you need two configurations and two Ingresses, and are charged twice, I would not agree with the term dual-stack.
I would call that double-stack.

@agadelshin
Contributor

Also, you will have two load balancers in GCP.

@WesleyVestjens

I agree, it's not an ideal solution, but for those who absolutely need it, it offers a way to get it done for now.

@avasani

avasani commented Oct 19, 2020

What is the status of this issue? Is anyone working on this?

@nicholasklem

Any news?

@bowei
Member

bowei commented Apr 20, 2021

PRs are welcome -- we are looking into the prioritization for this feature.

@joelsdc

joelsdc commented Aug 5, 2021

What do you guys need from the community to get this prioritized?

On a side note, we've tried manually adding (via the GUI / gcloud) an IPv6 frontend to the load balancer created by GKE Ingress, and it seems to work. In our case we also use self-managed SSL certs, and when we patch the ingress.gcp.kubernetes.io/pre-shared-cert annotation to make an update, the changes are not applied to the load balancer frontends. I think it might work with Google-managed SSL certs; however, this workaround is ugly and unreliable at best.

@nzapponi

nzapponi commented Oct 7, 2021

+1

@leonelvsc

Any update?

@olivierboucher

@bowei I'm looking to work on this. Would the new annotation being proposed by @nikhiljindal be the way to go?

@swetharepakula
Member

With the introduction of the Gateway API, we will look to add dual stack support there.

@swetharepakula swetharepakula closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 14, 2022
@koenpunt

koenpunt commented May 4, 2023

> With the introduction of the Gateway API, we will look to add dual stack support there.

How is the Gateway API solving this?

@jeremyvisser

Because the Gateway API has explicit support for multiple addresses: https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.Gateway

```yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
spec:
  addresses:
  - value: '10.0.0.1'
  - value: '2001:db8::1'
```

@koenpunt

koenpunt commented Nov 6, 2023

While we do like to use the Kubernetes Gateway API, the lack of Cloud CDN support still prevents us from using it.

@willianmga

I am still facing the issue of adding a second IP to the Ingress, and none of the suggested workarounds work, because:

  • Migrating to the Gateway API is not possible, because it does not support Cloud CDN.
  • Using two different Ingresses does not work, because cert-manager only supports setting up the http01 challenge on one of them. If you set it up on the IPv4 Ingress, Let's Encrypt will not issue the certificate because it reaches the IPv6 Ingress by default. If you set it up on the IPv6 Ingress, cert-manager cannot complete its self-check because it cannot reach an IPv6 address from an IPv4 cluster.

Is anybody else hitting similar issues that make the workarounds unsuitable?

Any thoughts on how these challenges could be solved?
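One possible direction, not from this thread: cert-manager's dns01 challenge validates domain ownership via DNS records rather than HTTP reachability, which may sidestep the IPv4/IPv6 reachability problems of the two-Ingress workaround. An untested sketch, assuming cert-manager v1 with Cloud DNS (the issuer name, email, and project below are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns            # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
    # dns01 succeeds regardless of which Ingress (IPv4 or IPv6) serves the host.
    - dns01:
        cloudDNS:
          project: my-gcp-project  # placeholder
```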
