
Rate limit processor #22883

Merged: 53 commits merged into elastic:master on Dec 12, 2020

Conversation

@ycombinator (Contributor) commented Dec 3, 2020

What does this PR do?

This PR introduces a new processor, rate_limit, to enforce rate limits on event throughput.

Events that exceed the rate limit are dropped. For now, only one rate limiting algorithm is supported: token bucket. In the future, additional rate limiting algorithms may be supported.
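
For a rough feel of the token-bucket semantics, here is a hedged Go sketch built on golang.org/x/time/rate; the processor implements its own token bucket rather than using this library, and the burst value below is made up purely for illustration:

package main

import (
    "fmt"

    "golang.org/x/time/rate"
)

func main() {
    // "10000/m" works out to roughly 166.67 events per second; the burst
    // of 100 here is an arbitrary illustrative value.
    limiter := rate.NewLimiter(rate.Limit(10000.0/60.0), 100)

    dropped := 0
    for i := 0; i < 100000; i++ {
        // Allow reports whether an event may pass right now; events that
        // exceed the configured rate are dropped rather than delayed.
        if !limiter.Allow() {
            dropped++
        }
    }
    fmt.Printf("dropped %d of 100000 events\n", dropped)
}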

Usage

Rate limit all events to 10000/m

Configuration:

processors:
- rate_limit:
    limit: "10000/m"

Rate limit events from the acme Cloud Foundry org to 500/s

Configurations (the following are equivalent alternatives):

processors:
- rate_limit:
    when.equals.cloudfoundry.org.name: "acme"
    limit: "500/s"

processors:
- if.equals.cloudfoundry.org.name: "acme"
  then:
  - rate_limit:
      limit: "500/s"

Rate limit events from the acme Cloud Foundry org and roadrunner space to 1000/h

Configurations (the following are equivalent alternatives):

processors:
- rate_limit:
    when.and:
    - equals.cloudfoundry.org.name: "acme"
    - equals.cloudfoundry.space.name: "roadrunner"
    limit: "1000/h"

processors:
- if.and:
  - equals.cloudfoundry.org.name: "acme"
  - equals.cloudfoundry.space.name: "roadrunner"
  then:
  - rate_limit:
      limit: "1000/h"

Rate limit events for each distinct Cloud Foundry org to 400/s

processors:
- rate_limit:
    fields:
    - "cloudfoundry.org.name"
    limit: "400/s"

Rate limit events for each distinct Cloud Foundry org and space combination to 20000/h

processors:
- rate_limit:
    fields:
    - "cloudfoundry.org.name"
    - "cloudfoundry.space.name"
    limit: "20000/h"

Why is it important?

This processor will allow Beats users to restrict the throughput of events through the Beat using a configurable rate limit.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Related issues

@botelastic botelastic bot added the needs_team label (Indicates that the issue/PR needs a Team:* label) Dec 3, 2020
@ycombinator ycombinator changed the title from "Rate limit processor" to "[WIP] Rate limit processor" Dec 3, 2020
@elasticmachine (Collaborator) commented Dec 3, 2020

💔 Build Failed


Build stats

  • Build Cause: ycombinator commented: jenkins, run the tests please

  • Start Time: 2020-12-11T13:15:20.693+0000

  • Duration: 100 min 14 sec

Test stats 🧪

Test Results
  Failed:  0
  Passed:  17427
  Skipped: 1379
  Total:   18806

Steps errors 1

metricbeat-crosscompile - Install Go/Mage/Python/Docker/Terraform 1.14.12
  • Took 0 min 18 sec
  • Description: .ci/scripts/install-tools.sh

Log output

Last 100 lines of log output:

[2020-12-11T14:09:30.103Z] + git config --global user.name beatsmachine
[2020-12-11T14:09:30.391Z] + go mod download
[2020-12-11T14:09:45.610Z] + .ci/scripts/terraform-cleanup.sh x-pack/metricbeat
[2020-12-11T14:09:45.610Z] + DIRECTORY=x-pack/metricbeat
[2020-12-11T14:09:45.610Z] + FAILED=0
[2020-12-11T14:09:45.610Z] ++ find x-pack/metricbeat -name terraform.tfstate
[2020-12-11T14:09:45.610Z] + for tfstate in $(find $DIRECTORY -name terraform.tfstate)
[2020-12-11T14:09:45.610Z] ++ dirname x-pack/metricbeat/module/aws/terraform.tfstate
[2020-12-11T14:09:45.610Z] + cd x-pack/metricbeat/module/aws
[2020-12-11T14:09:45.610Z] + terraform destroy -auto-approve
[2020-12-11T14:09:47.523Z] random_id.suffix: Refreshing state... [id=G2x4bQ]
[2020-12-11T14:09:47.523Z] random_password.db: Refreshing state... [id=none]
[2020-12-11T14:09:48.901Z] aws_sqs_queue.test: Refreshing state... [id=https://sqs.********.amazonaws.com/627286350134/metricbeat-test-1b6c786d]
[2020-12-11T14:09:48.901Z] aws_db_instance.test: Refreshing state... [id=metricbeat-test-1b6c786d]
[2020-12-11T14:09:48.901Z] aws_s3_bucket.test: Refreshing state... [id=metricbeat-test-1b6c786d]
[2020-12-11T14:09:55.466Z] aws_s3_bucket_metric.test: Refreshing state... [id=metricbeat-test-1b6c786d:EntireBucket]
[2020-12-11T14:09:55.466Z] aws_s3_bucket_object.test: Refreshing state... [id=someobject]
[2020-12-11T14:09:58.755Z] aws_s3_bucket_metric.test: Destroying... [id=metricbeat-test-1b6c786d:EntireBucket]
[2020-12-11T14:09:58.755Z] aws_sqs_queue.test: Destroying... [id=https://sqs.********.amazonaws.com/627286350134/metricbeat-test-1b6c786d]
[2020-12-11T14:09:58.755Z] aws_s3_bucket_object.test: Destroying... [id=someobject]
[2020-12-11T14:09:58.755Z] aws_db_instance.test: Destroying... [id=metricbeat-test-1b6c786d]
[2020-12-11T14:09:59.014Z] aws_s3_bucket_object.test: Destruction complete after 1s
[2020-12-11T14:09:59.014Z] aws_sqs_queue.test: Destruction complete after 1s
[2020-12-11T14:09:59.014Z] aws_s3_bucket_metric.test: Destruction complete after 1s
[2020-12-11T14:09:59.014Z] aws_s3_bucket.test: Destroying... [id=metricbeat-test-1b6c786d]
[2020-12-11T14:09:59.950Z] aws_s3_bucket.test: Destruction complete after 1s
[2020-12-11T14:10:09.927Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 10s elapsed]
[2020-12-11T14:10:19.907Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 20s elapsed]
[2020-12-11T14:10:29.902Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 30s elapsed]
[2020-12-11T14:10:39.878Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 40s elapsed]
[2020-12-11T14:10:49.855Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 50s elapsed]
[2020-12-11T14:10:59.831Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 1m0s elapsed]
[2020-12-11T14:11:09.810Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 1m10s elapsed]
[2020-12-11T14:11:19.785Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 1m20s elapsed]
[2020-12-11T14:11:29.863Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 1m30s elapsed]
[2020-12-11T14:11:39.840Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 1m40s elapsed]
[2020-12-11T14:11:49.817Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 1m50s elapsed]
[2020-12-11T14:11:59.798Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 2m0s elapsed]
[2020-12-11T14:12:09.777Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 2m10s elapsed]
[2020-12-11T14:12:19.757Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 2m20s elapsed]
[2020-12-11T14:12:29.734Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 2m30s elapsed]
[2020-12-11T14:12:39.713Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 2m40s elapsed]
[2020-12-11T14:12:49.691Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 2m50s elapsed]
[2020-12-11T14:12:59.668Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 3m0s elapsed]
[2020-12-11T14:13:09.672Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 3m10s elapsed]
[2020-12-11T14:13:19.649Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 3m20s elapsed]
[2020-12-11T14:13:29.628Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 3m30s elapsed]
[2020-12-11T14:13:39.607Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 3m40s elapsed]
[2020-12-11T14:13:49.585Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 3m50s elapsed]
[2020-12-11T14:13:59.710Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 4m0s elapsed]
[2020-12-11T14:14:09.687Z] aws_db_instance.test: Still destroying... [id=metricbeat-test-1b6c786d, 4m10s elapsed]
[2020-12-11T14:14:11.064Z] aws_db_instance.test: Destruction complete after 4m13s
[2020-12-11T14:14:11.064Z] random_password.db: Destroying... [id=none]
[2020-12-11T14:14:11.064Z] random_id.suffix: Destroying... [id=G2x4bQ]
[2020-12-11T14:14:11.064Z] random_id.suffix: Destruction complete after 0s
[2020-12-11T14:14:11.064Z] random_password.db: Destruction complete after 0s
[2020-12-11T14:14:11.064Z] 
[2020-12-11T14:14:11.064Z] Destroy complete! Resources: 7 destroyed.
[2020-12-11T14:14:11.064Z] + cd -
[2020-12-11T14:14:11.064Z] /var/lib/jenkins/workspace/Beats_beats_PR-22883/src/github.com/elastic/beats/src/github.com/elastic/beats
[2020-12-11T14:14:11.064Z] + exit 0
[2020-12-11T14:14:11.395Z] Client: Docker Engine - Community
[2020-12-11T14:14:11.395Z]  Version:           19.03.14
[2020-12-11T14:14:11.395Z]  API version:       1.40
[2020-12-11T14:14:11.395Z]  Go version:        go1.13.15
[2020-12-11T14:14:11.395Z]  Git commit:        5eb3275d40
[2020-12-11T14:14:11.395Z]  Built:             Tue Dec  1 19:20:17 2020
[2020-12-11T14:14:11.395Z]  OS/Arch:           linux/amd64
[2020-12-11T14:14:11.395Z]  Experimental:      false
[2020-12-11T14:14:11.395Z] 
[2020-12-11T14:14:11.395Z] Server: Docker Engine - Community
[2020-12-11T14:14:11.395Z]  Engine:
[2020-12-11T14:14:11.395Z]   Version:          19.03.14
[2020-12-11T14:14:11.395Z]   API version:      1.40 (minimum version 1.12)
[2020-12-11T14:14:11.395Z]   Go version:       go1.13.15
[2020-12-11T14:14:11.395Z]   Git commit:       5eb3275d40
[2020-12-11T14:14:11.395Z]   Built:            Tue Dec  1 19:18:45 2020
[2020-12-11T14:14:11.395Z]   OS/Arch:          linux/amd64
[2020-12-11T14:14:11.395Z]   Experimental:     false
[2020-12-11T14:14:11.395Z]  containerd:
[2020-12-11T14:14:11.395Z]   Version:          1.3.9
[2020-12-11T14:14:11.395Z]   GitCommit:        ea765aba0d05254012b0b9e595e995c09186427f
[2020-12-11T14:14:11.395Z]  runc:
[2020-12-11T14:14:11.395Z]   Version:          1.0.0-rc10
[2020-12-11T14:14:11.395Z]   GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
[2020-12-11T14:14:11.395Z]  docker-init:
[2020-12-11T14:14:11.395Z]   Version:          0.18.0
[2020-12-11T14:14:11.395Z]   GitCommit:        fec3683
[2020-12-11T14:14:16.840Z] Scheduling project: Beats » Beats Packaging » PR-22883
[2020-12-11T14:14:21.230Z] Starting building: Beats » Beats Packaging » PR-22883 #51
[2020-12-11T14:55:32.065Z] [INFO] For detailed information see: https://beats-ci.elastic.co/job/Beats/job/packaging/job/PR-22883/51/display/redirect
[2020-12-11T14:55:33.385Z] Running in /var/lib/jenkins/workspace/Beats_beats_PR-22883/src/github.com/elastic/beats
[2020-12-11T14:55:34.166Z] Running on Jenkins in /var/lib/jenkins/workspace/Beats_beats_PR-22883
[2020-12-11T14:55:34.198Z] [INFO] getVaultSecret: Getting secrets
[2020-12-11T14:55:34.266Z] Masking supported pattern matches of $VAULT_ADDR or $VAULT_ROLE_ID or $VAULT_SECRET_ID
[2020-12-11T14:55:34.925Z] + chmod 755 generate-build-data.sh
[2020-12-11T14:55:34.926Z] + ./generate-build-data.sh https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/Beats/beats/PR-22883/ https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/Beats/beats/PR-22883/runs/42 FAILURE 6013964
[2020-12-11T14:55:34.926Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/Beats/beats/PR-22883/runs/42/steps/?limit=10000 -o steps-info.json
[2020-12-11T14:55:38.154Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/Beats/beats/PR-22883/runs/42/tests/?status=FAILED -o tests-errors.json
[2020-12-11T14:55:38.705Z] INFO: curl https://beats-ci.elastic.co/blue/rest/organizations/jenkins/pipelines/Beats/beats/PR-22883/runs/42/log/ -o pipeline-log.txt

💚 Flaky test report

Tests succeeded.

Test stats 🧪

Test Results
  Failed:  0
  Passed:  17427
  Skipped: 1379
  Total:   18806

@andresrc andresrc added the Team:Platforms label (Label for the Integrations - Platforms team) Dec 4, 2020
@botelastic botelastic bot removed the needs_team label (Indicates that the issue/PR needs a Team:* label) Dec 4, 2020
@ycombinator ycombinator changed the title from "[WIP] Rate limit processor" to "Rate limit processor" Dec 7, 2020
@ycombinator (Contributor, Author):

Hey @jsoriano, I still need to work on some tests and polish the docs in this PR but I think it's in a good enough state for an initial review, when you have some time. Thanks!

@ycombinator ycombinator marked this pull request as ready for review December 7, 2020 13:18
@elasticmachine (Collaborator):

Pinging @elastic/integrations-platforms (Team:Platforms)

@ycombinator ycombinator requested a review from jsoriano December 7, 2020 13:18
@jsoriano (Member) left a comment:

This is looking good, I think this is going to be a great addition!

I have added some suggestions. In particular, I think we could release a first version without the algorithm options exposed to users. If we see in the future that there is a need to implement different algorithms or add options, we can add them then. For now, I think they would complicate the docs and the support of the feature.

Review threads (resolved): libbeat/processors/rate_limit/algorithm/token_bucket.go, libbeat/processors/rate_limit/rate_limit.go, libbeat/processors/rate_limit/rate_limit_test.go
@ycombinator (Contributor, Author):

Thanks for the feedback, @jsoriano. I'm good with not documenting the algorithm options for now and introducing them later when we have more than one algorithm to offer. I think we should leave the options in code, though, just in case we need them after release (related: #22883 (comment)).

@ycombinator ycombinator force-pushed the lb-processor-rate-limit branch from 2d7d89f to 3a27b6a on December 8, 2020 07:50
@ycombinator (Contributor, Author):

@jsoriano I believe I've addressed all your feedback. Please re-review this PR when you have a chance. Thanks!

@ycombinator ycombinator requested a review from jsoriano December 8, 2020 12:30
Review threads (resolved): libbeat/processors/rate_limit/algorithm/token_bucket.go, libbeat/processors/rate_limit/clock/clock.go, libbeat/processors/rate_limit/config.go, libbeat/processors/rate_limit/rate_limit.go
@ycombinator ycombinator force-pushed the lb-processor-rate-limit branch from a1ec593 to 6dd3fe8 on December 8, 2020 15:28
@ycombinator (Contributor, Author):

@jsoriano I've addressed your latest review feedback. Please re-review when you get a chance. Thanks!

@ycombinator ycombinator requested a review from jsoriano December 8, 2020 15:28
@jsoriano (Member) left a comment:

Thanks for addressing all the comments! I am still a bit concerned about the concurrency of the GC. Once this is solved I think we are good to go.

Review threads (resolved): libbeat/processors/rate_limit/algorithm/token_bucket.go
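
For context on the GC concurrency concern, here is a hedged sketch (not this PR's actual code) of the pattern the commit history points at: guard the collection pass with a non-blocking atomic flag and run the real work in its own goroutine, so the event path never blocks:

package main

import (
    "sync/atomic"
    "time"
)

type limiter struct {
    gcRunning int32 // 0 = idle, 1 = a GC pass is in flight
}

// maybeGC starts a garbage-collection pass over stale buckets, but only if
// no other pass is already running; callers never block on the collection.
func (l *limiter) maybeGC() {
    if !atomic.CompareAndSwapInt32(&l.gcRunning, 0, 1) {
        return // another goroutine is already collecting
    }
    go func() {
        defer atomic.StoreInt32(&l.gcRunning, 0)
        l.collectStaleBuckets()
    }()
}

func (l *limiter) collectStaleBuckets() {
    // Placeholder: drop buckets that have been idle past some threshold.
    time.Sleep(10 * time.Millisecond)
}

func main() {
    l := &limiter{}
    for i := 0; i < 5; i++ {
        l.maybeGC() // overlapping calls coalesce into a single GC pass
    }
    time.Sleep(50 * time.Millisecond) // give the GC goroutine time to finish
}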
@ycombinator (Contributor, Author):

@jsoriano I believe I've addressed all the points in the latest feedback round. Please re-review this PR when you get a chance. Thanks!

@ycombinator ycombinator requested a review from jsoriano December 10, 2020 03:22
@ycombinator ycombinator force-pushed the lb-processor-rate-limit branch from f0fd35a to 58fc271 on December 10, 2020 04:20
@ycombinator ycombinator added the needs_backport (PR is waiting to be backported to other branches.), v7.11.0, and v8.0.0 labels Dec 10, 2020
@jsoriano (Member) left a comment:

Added some extra comments, nothing really blocking.

Only things I think we should change:

  • Release this processor as beta for now.
  • Don't register it for the script processor.

Thanks!

Review threads (resolved): libbeat/processors/rate_limit/algorithm/algorithm.go, libbeat/processors/rate_limit/config.go, libbeat/processors/rate_limit/rate_limit.go, libbeat/processors/rate_limit/docs/rate_limit.asciidoc
@ycombinator (Contributor, Author):

@jsoriano Ready for another round of review. Thanks!

@ycombinator ycombinator force-pushed the lb-processor-rate-limit branch from 52259e4 to 2b92955 on December 11, 2020 01:13
@ycombinator (Contributor, Author):

jenkins, run the tests please

@jsoriano (Member):

Packaging failure is unrelated to this change; I can reproduce it in master. I think this change is ready to go: it is a fairly isolated change in a new processor that shouldn't affect other things.

The failure could have been introduced by #21874. I think the packaging jobs were launched on this PR because of the changes in go.mod, and they were not launched in the other PR 🤔

@ycombinator ycombinator merged commit 523d119 into elastic:master Dec 12, 2020
@ycombinator ycombinator deleted the lb-processor-rate-limit branch December 12, 2020 19:47
@ycombinator ycombinator removed the needs_backport label (PR is waiting to be backported to other branches.) Dec 12, 2020
@jsoriano (Member):

@ycombinator I have just seen that the docs page is not included in the docs build; to fix that, it should be added to libbeat/docs/processors-list.asciidoc.

ycombinator added a commit that referenced this pull request Dec 14, 2020
* Implement basic scaffolding for rate_limit processor

* Fixing import cycle

* Adding skeleton for token bucket algo

* Set default algorithm in default config

* Using an algo constructor

* Implement token bucket rate limiting algorithm

* Resolving some TODOs

* Adding license header

* Removing old TODO comment

* Adding tests

* Reverting to previous logic

* Adding CHANGELOG entry

* Adding TODOs for more tests

* Fixing comment

* Fixing error messages

* Fixing comment

* Fleshing out godoc comments

* Fixing logger location

* Fixing up docs a bit

* Adding test for "fields" config

* WIP: adding test for burst multiplier

* Return pointer to bucket from getBucket

* Keep pointers to buckets in map to avoid map reassignment

* Fix test

* Fix logic as we cannot allow withdrawal of fractional (<1) tokens

* Move burst multiplier default to token_bucket algo

* Implementing GC

* Making the factory take the algo config and return the algo

* Reduce nesting level

* Use mitchellh/hashstructure

* Using sync.Map

* Removing algorithm and algorithm options from documentation

* Mocking clock

* Adding license headers

* Using atomic.Uints for metrics counters

* Fixing logic

* Use github.com/jonboulle/clockwork

* Running make update

* Add mutex to ensure only one GC thread runs at any time

* Adding logging

* Remove NumBuckets GC threshold

* Use non-blocking mutex

* Perform actual GC in own goroutine

* Running mage fmt

* Fixing processor name

* Importing rate limit processor

* Initialize mutex

* Do not register as a JS processor

* Remove unused field

* Mark processor as beta

* Renaming package

* Flattening package hierarchy

* Remove SetClock from algorithm interface
# Conflicts:
#	go.mod
#	go.sum
@ycombinator (Contributor, Author):

Thanks @jsoriano. I've created #23096 to include the processor doc in the processors list.

@zez3 commented Dec 15, 2020

Events that exceed the rate limit are dropped

Thanks, this is highly welcome, but if I understand correctly there is no throttling or buffering/queuing for the dropped messages; the messages are simply lost.
Slow-down throttling would be the preferred approach, or a mechanism to retry from the point where events started being dropped.

@ycombinator (Contributor, Author):

Thanks for the feedback, @zez3. For now, we are starting with a rudimentary implementation that'll drop events that exceed the rate limit. In the future we may add other strategies like the ones you suggest, either as options to this processor or as separate processors.

@matschaffer (Contributor) commented Dec 17, 2020

Is there something emitted that indicates that a rate limit has been exceeded? I'm thinking of either something like an error doc sent to the output, or perhaps something that gets sent to the monitoring UI in Kibana.

@jsoriano (Member):

Is there something emitted that indicates that a rate limit has been exceeded? I'm thinking of either something like an error doc sent to the output, or perhaps something that gets sent to the monitoring UI in Kibana.

Not at the moment, but this is something we plan to do as part of #21020.

@chensheng0:
@ycombinator hi, I have some questions here.
One: does this rate limit depend on the bytes of events, or the number of events?
Two: if an event is dropped, will it be resent by the libbeat queue, or is it just dropped?

Thanks a lot.

@ycombinator (Contributor, Author):

One: does this rate limit depend on the bytes of events, or the number of events?

Number of events.

Two: if an event is dropped, will it be resent by the libbeat queue, or is it just dropped?

It is just dropped. In the future we may enhance this processor or provide a separate processor that re-enqueues rate-limited events, but due to the architecture of the libbeat processing pipeline this isn't possible today.

Labels: Team:Platforms (Label for the Integrations - Platforms team), v7.11.0, v8.0.0