
Allow more than 50 paths routes per domain #837

Closed
mafraba opened this issue Aug 29, 2019 · 18 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@mafraba

mafraba commented Aug 29, 2019

Currently it doesn't seem possible to set up more than 50 paths for a given host rule in an Ingress.
It's also not possible to have several Ingresses share one LB/IP, so one cannot route 50+ paths for a single domain.

As far as I can see, the issue is that each path in the Ingress spec is translated to a GCP PathRule containing a single path:

// each Ingress path becomes its own internal path rule ...
pathRules = append(pathRules, utils.PathRule{Path: path, Backend: *svcPort})

// ... which is then emitted with a single-entry Paths slice
Paths: []string{rule.Path},

So it's easy to hit the maximum of 50 path rules per path matcher.
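For illustration (hypothetical output, with placeholder names), this one-path-per-rule translation produces a UrlMap like the following, so 50 Ingress paths consume all 50 path rules:

$ gcloud compute url-maps describe someUrlMap
pathMatchers:
- pathRules:
  - paths:
    - /posts
    service: <post-service backend url>
  - paths:
    - /posts/*
    service: <post-service backend url>
  ...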

But several paths per PathRule seem to be supported by GCP:

type PathRule struct {
	// The list of path patterns to match. Each must start with / and the
	// only place a * is allowed is at the end following a /. The string fed
	// to the path matcher does not include any text after the first ? or #,
	// and those chars are not allowed here.
	Paths []string `json:"paths,omitempty"`

	// ... (remaining fields of the GCP compute API struct omitted)
}

So if the controller could pack several paths into a single PathRule, the limit would not be so low.

Some options I can think of:

  1. Allowing a space-separated list of paths in the ingress path property

    paths:
    - path: '/posts /posts/* /es/posts/ /es/posts/* /de/posts/ /de/posts/*'
      backend:
        serviceName: post-service
        servicePort: 9876
    

    Then split them during translation to the UrlMap.

  2. Grouping all the paths that point to the same backend under a single PathRule

    paths:
    - path: /posts
      backend:
        serviceName: post-service
        servicePort: 9876
    - path: /posts/*
      backend:
        serviceName: post-service
        servicePort: 9876
    - path: /es/posts
      backend:
        serviceName: post-service
        servicePort: 9876
    - path: /es/posts/*
      backend:
        serviceName: post-service
        servicePort: 9876
    - path: /tasks
      backend:
        serviceName: tasks-service
        servicePort: 5432
    

    Which would result in

    $ gcloud compute url-maps describe someUrlMap
    hostRules:
    - ...
    kind: compute#urlMap
    name: ...
    pathMatchers:
    - defaultService: ...
    name: ...
    pathRules:
    - paths:
      - /tasks
      service: <tasks-service backend url>
    - paths:
      - /posts
      - /posts/*
      - /es/posts
      - /es/posts/*
      service: <post-service backend url>
    selfLink: ...
    

The first option would deviate more from the k8s Ingress spec, I guess, but I suppose
it is easier to implement and allows a more concise syntax, similar to the one already
used to create path matchers with the gcloud command (see the --path-rules option
in https://cloud.google.com/load-balancing/docs/url-map#path-matchers).

The second option would respect the current syntax for GCE Ingress resources, but it
leaves less control to the user and, I guess, would be harder to implement. A rough
sketch of both translations follows below.
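Either way, the packing could live entirely in the translation layer. Here is a minimal, hypothetical Go sketch of both ideas, using simplified stand-in types rather than the controller's real ones (the actual types live in ingress-gce and the GCP compute API):

package main

import (
	"fmt"
	"sort"
	"strings"
)

// Hypothetical stand-ins, for illustration only.
type backend struct {
	serviceName string
	servicePort int
}

type ingressPath struct {
	path    string // option 1: may carry several space-separated patterns
	backend backend
}

type gcePathRule struct {
	paths   []string
	service string
}

// translate packs Ingress paths into as few PathRules as possible:
// it splits space-separated path lists (option 1) and groups all
// patterns that target the same backend into one rule (option 2).
func translate(in []ingressPath) []gcePathRule {
	byBackend := map[backend][]string{}
	for _, p := range in {
		byBackend[p.backend] = append(byBackend[p.backend], strings.Fields(p.path)...)
	}
	rules := make([]gcePathRule, 0, len(byBackend))
	for b, patterns := range byBackend {
		sort.Strings(patterns) // deterministic output for diffing
		rules = append(rules, gcePathRule{
			paths:   patterns,
			service: fmt.Sprintf("<%s:%d backend url>", b.serviceName, b.servicePort),
		})
	}
	return rules
}

func main() {
	for _, r := range translate([]ingressPath{
		{path: "/posts /posts/* /es/posts /es/posts/*", backend: backend{"post-service", 9876}},
		{path: "/tasks", backend: backend{"tasks-service", 5432}},
	}) {
		fmt.Printf("paths=%v -> %s\n", r.paths, r.service)
	}
}

As far as I can tell, the ordering of PathRules doesn't affect matching in a GCP path matcher (the most specific path wins), so grouping shouldn't change routing behavior.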

Sorry if these proposals are somehow too naive; I'm not familiar with the code.
If they make sense, I'd be willing to spend some time putting together a PR
with whichever solution is considered more appropriate.

@rramkumar1
Contributor

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 9, 2019
@bowei
Member

bowei commented Sep 9, 2019

Is this an optimization in how we are translating the config to the UrlMap or something that needs to be changed in the k8s schema itself? The first thing seems like a good optimization to make, the second is probably not easily done.

@mafraba
Author

mafraba commented Sep 10, 2019

No modification to the k8s schema. This would only be about how exactly ingress rules get implemented with GCP objects.

That said, I guess option 1 from the proposals would be even further from the HttpIngressPath k8s spec than the current implementation (which seems non-compliant already).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 9, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 8, 2020
@bowei
Member

bowei commented Jan 8, 2020

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 8, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 7, 2020
@rramkumar1 rramkumar1 removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 20, 2020
@rramkumar1 rramkumar1 added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jul 14, 2020
@mark-church

Asking here to understand general scale needs around host rules and path rules ... what numbers are generally needed on a per-Ingress basis for your use-cases?

@Toxblh

Toxblh commented Oct 28, 2020

@mark-church We have the same issue: an application with 21 modules, each with its own paths, and it will keep growing. We've already hit the limit because every module has 2 base paths (like /app and /app/*) and a few modules have an additional 2-4 paths (again in /smth and /smth/* pairs). In principle that's 21 route groups, but it already adds up to 49 path rules, and we're trying to figure out how to resolve it.

@safarishane

I too have a use case where we will need more than the standard 50 paths per domain. Any idea when this will be supported?

@abdidarmawan007

ingress-gce is awesome, but the limit of 50 paths per domain is not good for microservice backends.

@DaveWelling

DaveWelling commented Dec 16, 2021

It isn't obvious why this limit exists at all. I'm guessing there is some limit further upstream in the GCE infrastructure that drives it? I agree with @abdidarmawan007 that microservices will easily push consumers over this limit. I also have the problem described by other posters here where we provide multiple routes per service; for example, we often provide a /doc route to serve Swagger information. In our case, 150 routes would probably suffice, but I imagine that number varies greatly with the granularity of the microservices and with factors like the ones I've described that result in multiple routes per service.

@mark-church

This was actually a limitation of the underlying load balancer (the URL map), but it was recently lifted and should no longer apply: https://cloud.google.com/load-balancing/docs/quotas#url_maps

@DaveWelling

DaveWelling commented Dec 18, 2021

@mark-church That seems like great news? Is it difficult to remove or raise the restriction then? Or am I misunderstanding and this was not upstream but the actual limit that this issue is reporting? I only ask because my team reported hitting this problem again this past week.

@abdidarmawan007

abdidarmawan007 commented Dec 20, 2021

@DaveWelling you need to increase the "URL maps" quota for the Compute Engine API.
[screenshot: Compute Engine API quotas page showing the URL maps quota]

@abdidarmawan007

abdidarmawan007 commented Dec 20, 2021

As @mark-church said, Google Cloud already increased the "host rules and path matchers per URL map" limit to 1000. The number of URL maps per project, however, still needs a manual quota increase via GCP IAM Quotas.
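To check the current usage against that per-project quota (a hedged example; the numbers are illustrative and the output shape may vary with the gcloud version):

$ gcloud compute project-info describe --format="yaml(quotas)" | grep -B1 -A1 URL_MAPS
- limit: 10.0
  metric: URL_MAPS
  usage: 3.0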

@mark-church

That is the quota for URLmap resources per project (there is 1 URLmap per Ingress resource). The quotas that I referenced are hard limits within an individual URLmap, and I don't believe any quotas need to be configured to enable them, as they are the same for everybody.

@spencerhance I don't believe we do any validation in ingress-gce for the number of host/paths per Ingress right? I believe any errors caused by URLmap limits come from the URLmap resource itself and so these scale changes would be transparent to ingress-gce.

This should be easy to validate though by creating an Ingress with several hundred host or path matches.
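For example, a tiny generator like the following (a sketch; the names, host, and port are placeholders) emits an Ingress with several hundred paths; piping its output through kubectl apply -f - should either succeed or surface any remaining URLmap-side errors as events on the Ingress:

package main

import "fmt"

// Prints an Ingress manifest with several hundred paths, to probe the
// URLmap limits discussed above. Hypothetical names/host/port; adjust
// to match your cluster and API version.
func main() {
	fmt.Print(`apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: many-paths-test
spec:
  rules:
  - host: test.example.com
    http:
      paths:
`)
	for i := 0; i < 300; i++ {
		fmt.Printf(`      - path: /svc-%d/*
        pathType: ImplementationSpecific
        backend:
          service:
            name: test-service
            port:
              number: 80
`, i)
	}
}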

@swetharepakula
Member

We do not do any validation in ingress-gce on the number of hosts/paths for an Ingress. This limitation was purely due to the URLMap quota. Closing, as this is not an ingress-gce issue and it has been fixed by the underlying GCE infra.
