
A single app should not be able to break ingress-nginx for the whole cluster #3588

Closed

ElvinEfendi opened this issue Dec 20, 2018 · 12 comments

@ElvinEfendi
Member
We have seen cases where the configuration for a single app got ingress-nginx stuck for the whole cluster. In one of them, the app used the

auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri

annotation, following the ingress-nginx docs. The problem was that we were running an older ingress-nginx version that does not define the $escaped_request_uri variable. Because the config test kept failing, ingress-nginx could not apply any new Nginx configuration change for the whole cluster.

We have to come up with a way to avoid these situations, where a single app can break ingress-nginx for all the other apps.

Related issues: #3435, #3579
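For concreteness, a minimal sketch of the kind of Ingress manifest that triggers this failure mode (the name, host, and service are hypothetical; the annotation is the one quoted above):

```yaml
# Hypothetical manifest illustrating the failure mode: on controller
# versions that do not define $escaped_request_uri, this annotation is
# rendered into an nginx.conf that fails the config test, which blocks
# reloads for every app in the cluster, not just this one.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app            # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$escaped_request_uri
spec:
  rules:
    - host: example-app.example.com   # hypothetical
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```

(At the time this issue was filed, Ingress objects used the older extensions/v1beta1 API; the modern networking.k8s.io/v1 form is shown here for readability.)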

@ghouscht

ghouscht commented Jan 4, 2019

A possible solution could be to render a separate config for each Ingress definition and dynamically include it (only if it is valid), instead of rendering all services into one huge config. I think this could be a fairly reliable fix. However, it is a large amount of work (it changes, more or less, the core of how nginx-ingress works) and would probably break other things.

(I have had no time to verify this in any way; it just came up as an idea and I thought I would put it here for discussion.)
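The per-Ingress include idea above can be sketched roughly as follows. This is not how the controller is implemented; `assemble_includes`, `validate_config`, and the injectable `tester` are hypothetical names, and a real controller would shell out to `nginx -t` against a complete rendered configuration:

```python
import subprocess
import tempfile
from pathlib import Path


def validate_config(path, tester=None):
    """Return True if the rendered fragment passes a syntax check.

    By default this shells out to `nginx -t` (which requires nginx to be
    installed); a custom tester callable can be injected instead.
    """
    if tester is None:
        tester = lambda p: subprocess.run(
            ["nginx", "-t", "-c", str(p)], capture_output=True
        ).returncode == 0
    return tester(path)


def assemble_includes(fragments, tester=None):
    """Write each per-Ingress fragment to its own file and emit include
    directives only for the ones that validate, so one bad Ingress
    cannot block config reloads for all the others."""
    conf_dir = Path(tempfile.mkdtemp())
    included = []
    for name, body in fragments.items():
        frag = conf_dir / f"{name}.conf"
        frag.write_text(body)
        if validate_config(frag, tester):
            included.append(frag)
    return [f"include {p};" for p in sorted(included)]
```

The trade-off ghouscht mentions shows up here: validation now happens per fragment rather than once for the whole config, so cross-fragment interactions (shared upstreams, duplicate server names) would still need a second, whole-config check.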

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 4, 2019
@kfox1111

kfox1111 commented Apr 4, 2019

We have generally used OPA to put restrictions in place on which annotations can be used, to solve this kind of thing. This particular case, though, is interesting. It could probably be handled via OPA, but might be fairly hard to do...

@Bessonov

Bessonov commented Apr 4, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 4, 2019
@aledbf
Member

aledbf commented Apr 4, 2019

> We have generally used opa to put in place restrictions on what annotations could be used to solve this kind of thing.

The fix for this issue is #3802

@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2019
@Bessonov

Bessonov commented Jul 3, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2019
@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 1, 2019
@Bessonov

Bessonov commented Oct 1, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 1, 2019
@fejta-bot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 30, 2019
@Bessonov

/remove-lifecycle stale

Is it fixed?

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 30, 2019
@aledbf
Member

aledbf commented Feb 11, 2020

Closing. The fix for this issue is #3802

@aledbf aledbf closed this as completed Feb 11, 2020