
Percentage settings for memory limiter processor and ballast size #1078

Closed
pavolloffay opened this issue Jun 4, 2020 · 5 comments · Fixed by #1622
Labels
enhancement New feature or request

Comments

@pavolloffay
Member

Is your feature request related to a problem? Please describe.

The flag --mem-ballast-size-mib and the memory limiter processor configuration accept only absolute values. This makes it hard to enable and configure these components in a default/hardcoded configuration. Percentage settings would apply across a wide range of deployments and would allow sensible defaults to be shipped.

--mem-ballast-size-mib=2000

processors:
  memory_limiter:
    ballast_size_mib: 2000
    check_interval: 5s
    limit_mib: 4000
    spike_limit_mib: 500

Describe the solution you'd like

Use percentages to set --mem-ballast-size-mib, ballast_size_mib, limit_mib and spike_limit_mib. The check_interval could be given a sensible default value.
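A percentage-based configuration could look roughly like the sketch below. The flag and field names shown here (--mem-ballast-size-percentage, limit_percentage, spike_limit_percentage) are illustrative assumptions for this proposal, not a final design:

--mem-ballast-size-percentage=30

processors:
  memory_limiter:
    check_interval: 5s
    limit_percentage: 75
    spike_limit_percentage: 10

With such settings, the same configuration file would work on a 4 GiB container and a 64 GiB host alike, since the absolute limits would be computed at startup from the detected total memory.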

Describe alternatives you've considered

Wrap the memory limiter and dynamically calculate the absolute values. However, it would be better to implement this directly in the core.

Additional context

Jaeger binaries would like to enable the memory limiter in their default configuration.

@flands
Contributor

flands commented Jun 4, 2020

The memory limiter is critical yet difficult to enable by default -- agreed, something like this is needed and must be in core. Even better if the ballast size did not need to be set in two different places.

@tigrannajaryan
Member

@pavolloffay Can you clarify: percentage of what? Where do we get the total to calculate the fraction?

@flands flands added this to the GA 1.0 milestone Jun 18, 2020
@flands flands mentioned this issue Jun 18, 2020
@pavolloffay
Member Author

The total amount of memory would be derived from the host environment. I haven't looked into this closely, but there should be a way to get the total memory of the runtime. For example, the JDK supports this: https://www.eclipse.org/openj9/docs/xxinitialrampercentage/
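To answer "percentage of what": on Linux the host's total memory can be read from /proc/meminfo (a library such as gopsutil is another option). The sketch below is a hypothetical helper, not the collector's actual implementation; it shows how a percentage setting could be resolved to an absolute byte limit at startup:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseMemTotal extracts the KiB value from a /proc/meminfo line
// such as "MemTotal:       16305668 kB".
func parseMemTotal(line string) (uint64, bool) {
	if !strings.HasPrefix(line, "MemTotal:") {
		return 0, false
	}
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return 0, false
	}
	kib, err := strconv.ParseUint(fields[1], 10, 64)
	if err != nil {
		return 0, false
	}
	return kib, true
}

// totalMemoryBytes reads MemTotal from /proc/meminfo (Linux only).
func totalMemoryBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if kib, ok := parseMemTotal(sc.Text()); ok {
			return kib * 1024, nil
		}
	}
	return 0, fmt.Errorf("MemTotal not found in /proc/meminfo")
}

func main() {
	total, err := totalMemoryBytes()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// With the total known, a setting like limit_percentage: 75
	// resolves to an absolute limit of total * 75 / 100 bytes.
	limit := total * 75 / 100
	fmt.Printf("total=%d bytes, 75%% limit=%d bytes\n", total, limit)
}
```

In a container, /proc/meminfo reports the host's memory rather than the cgroup limit, so a complete solution would also need to consult the cgroup memory limit when one is set.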

@morigs
Contributor

morigs commented Aug 9, 2020

Is there any progress? Is somebody working on this?

@pavolloffay
Member Author

I have started working on this.

@andrewhsu andrewhsu added the enhancement New feature or request label Jan 6, 2021
MovieStoreGuy pushed a commit to atlassian-forks/opentelemetry-collector that referenced this issue Nov 11, 2021
Troels51 pushed a commit to Troels51/opentelemetry-collector that referenced this issue Jul 5, 2024