
Add a memory limiter processor #498

Conversation

@tigrannajaryan (Member) commented Jan 13, 2020

This adds a processor that drops data according to configured memory limits.
The processor is important for high-load situations where the receiving rate exceeds the exporting
rate (an extreme case of this is when the target of exporting is unavailable).

A typical production run will need to include this processor in every pipeline
as the first processor (it needs to be first to be effective).
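
For context, a minimal sketch of how such a processor could be configured (the field names, values, and defaults shown here are illustrative assumptions and may not match the exact options added in this PR):

```yaml
processors:
  memory_limiter:
    # How often memory usage is sampled (illustrative value).
    check_interval: 1s
    # Limit in MiB; once usage crosses it, incoming data is dropped.
    limit_mib: 4000
    # Extra headroom allowed for short allocation spikes (illustrative).
    spike_limit_mib: 500
```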

@pjanotti (Contributor) commented Jan 13, 2020

> Typical production run will need to have this processor included in every pipeline immediately after the batch processor.

For it to be effective it needs to be right after the receivers, i.e., the first processor in the pipeline.

@tigrannajaryan (Member, Author)
> For it to be effective it needs to be right after the receivers, i.e., the first processor in the pipeline.

Right, I keep forgetting that batch is also async.
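
To illustrate the ordering discussed above, a sketch of a pipeline with the limiter placed first, ahead of the asynchronous batch processor (receiver and exporter names are placeholders):

```yaml
service:
  pipelines:
    traces:
      receivers: [jaeger]
      # memory_limiter comes first: batch hands data off asynchronously,
      # so a limiter placed after it would not see incoming load early
      # enough to refuse it.
      processors: [memory_limiter, batch]
      exporters: [logging]
```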

@codecov-io commented Jan 13, 2020

Codecov Report

Merging #498 into master will increase coverage by 0.22%.
The diff coverage is 89.78%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #498      +/-   ##
==========================================
+ Coverage   75.73%   75.96%   +0.22%     
==========================================
  Files         119      122       +3     
  Lines        7414     7551     +137     
==========================================
+ Hits         5615     5736     +121     
- Misses       1530     1543      +13     
- Partials      269      272       +3
Impacted Files Coverage Δ
processor/memorylimiter/metrics.go 100% <100%> (ø)
defaults/defaults.go 81.81% <100%> (+0.42%) ⬆️
processor/memorylimiter/memorylimiter.go 84.41% <84.41%> (ø)
processor/memorylimiter/factory.go 92% <92%> (ø)
service/builder/exporters_builder.go 70.58% <0%> (-2.36%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ec4ad0c...c2ed0c7.

@pjanotti (Contributor) left a comment

Code LGTM, just one comment about the "recommended" frequency for the check.

Review comment on processor/memorylimiter/testdata/config.yaml (resolved)
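
The resolved comment concerned the check frequency used in that test config. The recommended value is not visible in this thread; as a rough, assumed illustration of the trade-off, a shorter interval reacts to memory growth sooner at the cost of more frequent sampling:

```yaml
memory_limiter:
  # Shorter intervals detect memory growth sooner but check more often
  # (illustrative value, not the one recommended in the review).
  check_interval: 500ms
```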
@pjanotti (Contributor) left a comment

LGTM

@tigrannajaryan tigrannajaryan merged commit 21a70d6 into open-telemetry:master Jan 14, 2020
@tigrannajaryan tigrannajaryan deleted the feature/tigran/memory-limit branch January 14, 2020 18:20
hughesjj pushed a commit to hughesjj/opentelemetry-collector that referenced this pull request Apr 27, 2023
Troels51 pushed a commit to Troels51/opentelemetry-collector that referenced this pull request Jul 5, 2024
swiatekm pushed a commit to swiatekm/opentelemetry-collector that referenced this pull request Oct 9, 2024