Rate limit processor #22883
Conversation
💔 Build Failed
Test stats 🧪
| Test | Results |
|---|---|
| Failed | 0 |
| Passed | 17427 |
| Skipped | 1379 |
| Total | 18806 |
Hey @jsoriano, I still need to work on some tests and polish the docs in this PR but I think it's in a good enough state for an initial review, when you have some time. Thanks!
Pinging @elastic/integrations-platforms (Team:Platforms)
This is looking good, I think this is going to be a great addition!
I have added some suggestions. In particular, I think that we could release a first version without the algorithm options exposed to users. If we see in the future that there is a need to implement different algorithms or add options, we can add them later. But for now, I think they can complicate the docs and the support of the feature.
Thanks for the feedback, @jsoriano. I'm good with not documenting the algorithm options for now and introducing them later when we have more than one algorithm to offer. I think we should leave the options in code, though, just in case we need them after release (related: #22883 (comment)).
Force-pushed from 2d7d89f to 3a27b6a
@jsoriano I believe I've addressed all your feedback. Please re-review this PR when you have a chance. Thanks!
Force-pushed from a1ec593 to 6dd3fe8
@jsoriano I've addressed your latest review feedback. Please re-review when you get a chance. Thanks!
Thanks for addressing all the comments! I am still a bit concerned about the concurrency of the GC. Once this is solved I think we are good to go.
@jsoriano I believe I've addressed all the points in the latest feedback round. Please re-review this PR when you get a chance. Thanks!
Force-pushed from f0fd35a to 58fc271
Added some extra comments, nothing really blocking.
The only things I think we should change:
- Release this processor as beta for now.
- Don't register it for the script processor.
Thanks!
@jsoriano Ready for another round of review. Thanks!
Force-pushed from 52259e4 to 2b92955
jenkins, run the tests please
The packaging failure is unrelated to this change; I can reproduce it in master. I think this change is ready to go, it is quite an isolated change in a new processor that shouldn't affect other things. The failure could have been introduced by #21874; I think the packaging jobs were launched on this PR because of the changes in
@ycombinator I have just seen that the docs page is not included in the docs build; for that, it should be added to
* Implement basic scaffolding for rate_limit processor
* Fixing import cycle
* Adding skeleton for token bucket algo
* Set default algorithm in default config
* Using an algo constructor
* Implement token bucket rate limiting algorithm
* Resolving some TODOs
* Adding license header
* Removing old TODO comment
* Adding tests
* Reverting to previous logic
* Adding CHANGELOG entry
* Adding TODOs for more tests
* Fixing comment
* Fixing error messages
* Fixing comment
* Fleshing out godoc comments
* Fixing logger location
* Fixing up docs a bit
* Adding test for "fields" config
* WIP: adding test for burst multiplier
* Return pointer to bucket from getBucket
* Keep pointers to buckets in map to avoid map reassignment
* Fix test
* Fix logic as we cannot allow withdrawal of fractional (<1) tokens
* Move burst multiplier default to token_bucket algo
* Implementing GC
* Making the factory take the algo config and return the algo
* Reduce nesting level
* Use mitchellh/hashstructure
* Using sync.Map
* Removing algorithm and algorithm options from documentation
* Mocking clock
* Adding license headers
* Using atomic.Uints for metrics counters
* Fixing logic
* Use github.com/jonboulle/clockwork
* Running make update
* Add mutex to prevent only one GC thread from running at any time
* Adding logging
* Remove NumBuckets GC threshold
* Use non-blocking mutex
* Perform actual GC in own goroutine
* Running mage fmt
* Fixing processor name
* Importing rate limit processor
* Initialize mutex
* Do not register as a JS processor
* Remove unused field
* Mark processor as beta
* Renaming package
* Flatenning package hierarchy
* Remove SetClock from algorithm interface

# Conflicts: # go.mod # go.sum
Thanks, it's highly welcome, but if I understand it correctly there is no throttling or buffering/queuing happening for the dropped messages. As in, the messages are lost.
Thanks for the feedback, @zez3. For now, we are starting with a rudimentary implementation that'll drop events that exceed the rate limit. In the future we may add other strategies like the ones you suggest, either as options to this processor or as separate processors.
Is there something emitted that indicates that a rate limit has been exceeded? I'm thinking either something like an error doc sent to the output, or perhaps something that gets sent to the monitoring UI in Kibana.
Not at the moment, but this is something we plan to do as part of #21020.
@ycombinator hi, I have some questions here. Thanks a lot.
Number of events.
It is just dropped. In the future we may enhance this processor or provide a separate processor that re-enqueues rate-limited events, but due to the architecture of the libbeat processing pipeline this isn't possible to do today.
What does this PR do?
This PR introduces a new processor, `rate_limit`, to enforce rate limits on event throughput. Events that exceed the rate limit are dropped. For now, only one rate limiting algorithm is supported: token bucket. In the future, additional rate limiting algorithms may be supported.
Usage
Rate limit all events to 10000 /m
Configuration:
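A rough sketch of what such a configuration might look like, assuming the rate is expressed through a `limit` option under the standard `processors` section (the option name is an assumption, not confirmed by this PR):

```yaml
processors:
  # Sketch: drop any events beyond 10000 per minute.
  # "limit" is an assumed option name, used here for illustration.
  - rate_limit:
      limit: "10000/m"
```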
Rate limit events from the `acme` Cloud Foundry org to 500 /s
Configurations (each of these is an alternative to the others):
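A sketch of two plausible alternative forms, assuming the generic `when` condition available to Beats processors, the `if`/`then` processor, and a `cloudfoundry.org.name` event field; these names are assumptions for illustration:

```yaml
# Alternative A (sketch): condition attached directly to the processor.
processors:
  - rate_limit:
      when:
        equals:
          cloudfoundry.org.name: "acme"   # assumed field name
      limit: "500/s"                      # assumed option name

# Alternative B (sketch): wrap the processor in an if/then block.
processors:
  - if:
      equals:
        cloudfoundry.org.name: "acme"
    then:
      - rate_limit:
          limit: "500/s"
```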
Rate limit events from the `acme` Cloud Foundry org and `roadrunner` space to 1000 /h
Configurations (each of these is an alternative to the others):
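A sketch of one plausible form, combining conditions on the org and space fields with `and`; field and option names are assumptions as above:

```yaml
processors:
  - rate_limit:
      when:
        and:
          - equals:
              cloudfoundry.org.name: "acme"           # assumed field name
          - equals:
              cloudfoundry.space.name: "roadrunner"   # assumed field name
      limit: "1000/h"
```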
Rate limit events for each distinct Cloud Foundry org to 400 /s
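A sketch of how this per-group limit might be expressed with the processor's `fields` option (a `fields` config is mentioned in the commit history); the exact field name is an assumption:

```yaml
processors:
  - rate_limit:
      fields:
        - cloudfoundry.org.name   # assumed field name; one rate-limit bucket per distinct org
      limit: "400/s"
```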
Rate limit events for each distinct Cloud Foundry org and space combination to 20000 /h
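Similarly, a sketch of the combined-key variant, assuming `fields` accepts multiple field names that together form the rate-limiting key:

```yaml
processors:
  - rate_limit:
      fields:
        - cloudfoundry.org.name
        - cloudfoundry.space.name   # one bucket per distinct org/space pair
      limit: "20000/h"
```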
Why is it important?
This processor will allow Beats users to restrict the throughput of events through the Beat using a configurable rate limit.
Checklist
- I have made corresponding change to the default configuration files
- I have added an entry in `CHANGELOG.next.asciidoc` or `CHANGELOG-developer.next.asciidoc`.

Related issues