This repository has been archived by the owner on Apr 27, 2021. It is now read-only.
forked from kubernetes/ingress-nginx
log a warning when updateCh is full #13
Open
ElvinEfendi wants to merge 31 commits into master from log-when-update-channel-is-full
Conversation
Revert to using proxy_protocol_addr for real ip
Enable CI builds
Fix container not booting
Real IP Fix from Upstream
Introduce an upstream-hash-by annotation to support consistent hashing by nginx variable
Fix regressions from the initial pass of the upstream-hash-by annotation
Sync upstream
make sure http host is passed to upstream
Define pipeline build steps
Nicer `pipa build` usage
Sync upstream
Hm, the whole RingBuffer idea sounds fishy to me TBH, but this whole issue seems to require an understanding of informers and the store, and how they are used along with the controller channels, all of which I lack atm :/
If one thing can monopolize the update queue, I imagine the proper fix would be an event scheduler: kind of like having a queue per resource and pulling them off in some sort of "fair" manner. A ring buffer will discard things, but which events will it discard? If it's the thing that's spamming, then sure, fuck them. If it's the actual update we want, a ring buffer will just cause the same problem.
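A rough sketch of the fair-scheduling idea described in that comment: one queue per resource, drained round-robin, so a single noisy resource cannot starve updates from the others. All names and structure here are hypothetical, not code from this PR or from ingress-nginx.

```go
package main

import "fmt"

// scheduler keeps one queue per resource and serves them round-robin.
type scheduler struct {
	order  []string            // round-robin order of known resources
	queues map[string][]string // per-resource pending events
}

func newScheduler() *scheduler {
	return &scheduler{queues: map[string][]string{}}
}

// Push enqueues an event on the queue for its resource.
func (s *scheduler) Push(resource, event string) {
	if _, ok := s.queues[resource]; !ok {
		s.order = append(s.order, resource)
	}
	s.queues[resource] = append(s.queues[resource], event)
}

// Pop returns one event from the next non-empty queue in round-robin order.
func (s *scheduler) Pop() (string, bool) {
	for i, res := range s.order {
		if q := s.queues[res]; len(q) > 0 {
			s.queues[res] = q[1:]
			// Rotate so the next Pop starts after this resource.
			s.order = append(s.order[i+1:], s.order[:i+1]...)
			return q[0], true
		}
	}
	return "", false
}

func main() {
	s := newScheduler()
	for i := 0; i < 5; i++ {
		s.Push("n2/noisy", fmt.Sprintf("spam-%d", i))
	}
	s.Push("n1/app", "real-update")
	ev, _ := s.Pop()
	fmt.Println(ev) // spam-0
	ev, _ = s.Pop()
	fmt.Println(ev) // real-update: n1 is served despite n2's flood
}
```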
Oh, and this repo is public btw; is this what we want for these discussions?
ElvinEfendi changed the title from "log a wardning when updateCh is full" to "log a warnning when updateCh is full" on Mar 8, 2018
ElvinEfendi changed the title from "log a warnning when updateCh is full" to "log a warning when updateCh is full" on Mar 8, 2018
NOTE: this is mostly at the idea phase and WIP.
Recently an issue was reported at kubernetes#2022 where the controller was not able to start Nginx: there were too many events, updateCh was full, and writes to it were failing. The author fixed it at kubernetes#2082 by switching to a RingChannel, which never blocks writes and instead drops the oldest value when it is full, again with a buffer size of 1024. But there are cases where dropping an event can lead to incorrect behaviour. Imagine two namespaces generating events: a resource in n1 generates an event, and at the same moment something is on fire in n2 and suddenly generates 1024 events. The controller would then drop the event for n1 and never regenerate the Nginx config for that namespace.
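For context, here is a minimal, hand-rolled sketch of that drop-oldest behaviour (the upstream fix uses a RingChannel library; everything below is illustrative and not thread-safe):

```go
package main

import "fmt"

// ringCh is an illustrative drop-oldest buffer: writes never block, and
// when the buffer is full the oldest queued event is discarded to make room.
type ringCh struct {
	buf chan string
}

func newRingCh(size int) *ringCh {
	return &ringCh{buf: make(chan string, size)}
}

func (r *ringCh) Write(ev string) {
	for {
		select {
		case r.buf <- ev:
			return
		default:
			// Buffer is full: silently discard the oldest event and retry.
			select {
			case <-r.buf:
			default:
			}
		}
	}
}

func main() {
	r := newRingCh(2)
	for _, ev := range []string{"n1/update", "n2/spam-1", "n2/spam-2", "n2/spam-3"} {
		r.Write(ev)
	}
	close(r.buf)
	for ev := range r.buf {
		fmt.Println(ev) // n1/update is gone: only the two newest n2 events remain
	}
}
```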
This PR adds logging so that we can at least see when this edge case happens.
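As a rough sketch of the idea, assuming a plain buffered channel and stdlib logging (identifiers here are illustrative, not the PR's actual code):

```go
package main

import "log"

// Event is a placeholder for the controller's update events.
type Event struct{ Key string }

// updateCh stands in for the controller's buffered update channel.
var updateCh = make(chan Event, 1024)

// sendEvent performs a non-blocking send and logs a warning when the
// channel is full, so dropped events are at least visible in the logs.
func sendEvent(ev Event) {
	select {
	case updateCh <- ev:
	default:
		log.Printf("warning: updateCh is full (cap=%d); dropping event for %q",
			cap(updateCh), ev.Key)
	}
}
```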
@zrdaley @Shopify/edgescale
/cc @Shopify/cloudplatform