
Make the maximum number of listeners in Gateway CRD configurable #2869

Closed
bravenut opened this issue Mar 13, 2024 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@bravenut

What would you like to be added:
I would like the Gateway CRD template to be improved by making maxItems on the Gateway listeners configurable, instead of hard-coding it to 64 items (see the experimental Gateway CRD).
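For context, the limit comes from the CRD's OpenAPI validation schema. An abridged sketch of the relevant stanza (intermediate schema levels omitted for brevity) looks roughly like this:

```yaml
# Abridged sketch of the validation in the Gateway CRD
# (gateways.gateway.networking.k8s.io); intermediate schema
# levels are omitted here for brevity.
openAPIV3Schema:
  properties:
    spec:
      properties:
        listeners:
          type: array
          minItems: 1
          maxItems: 64   # the hard-coded cap this issue asks to make configurable
```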

Why this is needed:
Our current setup uses an "edge" gateway (a Kubernetes Gateway API gateway) that routes requests to other "inner" gateways based on path prefix. We currently have around 30 inner gateways that the edge gateway routes to, but we will have more in the future, so we will hit the hard-coded limit of 64 very soon.
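One possible reading of that setup, sketched with hypothetical names (this assumes the edge HTTPRoutes forward to Services sitting in front of the inner gateways, since a Gateway itself cannot be a backendRef):

```yaml
# Hypothetical edge setup: one listener per inner gateway, plus an
# HTTPRoute forwarding a path prefix to the Service fronting it.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge-gateway               # hypothetical name
spec:
  gatewayClassName: example-class  # hypothetical class
  listeners:
  - name: inner-team-a             # one of ~30 such listeners today
    protocol: HTTP
    port: 80
    hostname: team-a.example.com
  # ...more listeners; this list is what hits the maxItems: 64 cap
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: route-to-team-a
spec:
  parentRefs:
  - name: edge-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /team-a
    backendRefs:
    - name: team-a-inner-gw-svc    # hypothetical Service fronting the inner gateway
      port: 80
```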

@bravenut bravenut added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 13, 2024
@shaneutt shaneutt added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 13, 2024
@howardjohn
Contributor

There is a lot of discussion around similar topics in #1863.

I strongly agree with the need to support more than 64 listeners. But a few notes:

  • Just raising the limit is probably not sufficient; realistically it is not going to be raised to anything on the order of the largest users' requirements (let's say 10k listeners)
  • A more scalable approach is to split out the listeners, as the merging GEP above does
  • Having different limits in different CRDs sets a bad precedent; there should be a single API definition across users/clusters/implementations

cc @robscott

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2024
@youngnick
Contributor

Hi @bravenut, I don't really understand how you're doing this, since a Gateway is not a valid targetRef in any implementation I know about.

The intended design here is for the Gateway to hold summarized Listeners, most likely bounded by a wildcard certificate or similar. For example, you could have two listeners, one for *.example.com and one for *.example.net, and then have HTTPRoutes claim more specific domains beneath those listeners, e.g. using morespecific.example.com to attach only to the *.example.com listener.
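A minimal sketch of that design, with hypothetical names and certificate Secrets:

```yaml
# Sketch of the summarized-listener design described above (names illustrative).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: wildcard-example-com
    protocol: HTTPS
    port: 443
    hostname: "*.example.com"
    tls:
      certificateRefs:
      - name: wildcard-example-com-cert   # hypothetical wildcard cert Secret
  - name: wildcard-example-net
    protocol: HTTPS
    port: 443
    hostname: "*.example.net"
    tls:
      certificateRefs:
      - name: wildcard-example-net-cert
---
# An HTTPRoute claiming a more specific domain attaches only to the
# listener whose hostname it intersects.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: morespecific
spec:
  parentRefs:
  - name: shared-gateway
  hostnames:
  - morespecific.example.com   # intersects only with *.example.com
  rules:
  - backendRefs:
    - name: morespecific-svc   # hypothetical backend Service
      port: 8080
```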

We'd need some example config of what you want to do here to understand your request better. Remember that Listeners are only required to be distinct on the following fields:

  • port
  • protocol
  • TLS details
  • optionally Hostname

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. […]

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Aug 18, 2024
@kghost

kghost commented Sep 3, 2024

/reopen

@k8s-ci-robot
Contributor

@kghost: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@kghost

kghost commented Sep 3, 2024

The intended design here is for the Gateway to hold summarized Listeners […]

Sometimes they cannot be summarized. For example, to build a CDN, we would probably need hundreds of thousands of listeners with different TLDs.

@youngnick
Contributor

I think that this use case is probably better addressed with something like #3213, although I still think that having hundreds of thousands of listeners on a single Gateway is far outside what our current scaling design takes into account, so it's likely that there will be scale problems we haven't considered.

Also, if you're terminating TLS or otherwise want to inspect HTTP metadata at the Gateway implementation point, then using hostname in HTTPRoute and attaching many HTTPRoutes to a single Listener is probably a better design.

More generally, it's going to be far more scalable to attach many HTTPRoutes to a Gateway than to have many Listeners (assuming you can summarize the listeners onto the same port), as in the sketch below.
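A rough sketch of that pattern (hypothetical names, reusing the shared-gateway shape from the earlier example): many HTTPRoutes share one listener and are distinguished by hostname.

```yaml
# Many HTTPRoutes attached to a single listener, differentiated by hostname.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tenant-a                        # hypothetical; one such route per domain
spec:
  parentRefs:
  - name: shared-gateway
    sectionName: wildcard-example-com   # pin the route to one specific listener
  hostnames:
  - tenant-a.example.com
  rules:
  - backendRefs:
    - name: tenant-a-svc                # hypothetical backend Service
      port: 8080
```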
