
The suggested example configuration contradicts the opentelemetry-collector docs (batch, memory_limiter) #35759

Closed
x-yuri opened this issue Oct 13, 2024 · 3 comments · Fixed by #35768
Assignees: dashpole
Labels: documentation (Improvements or additions to documentation), exporter/googlemanagedprometheus (Google Managed Prometheus exporter)

Comments

x-yuri commented Oct 13, 2024

Component(s)

exporter/googlemanagedprometheus

Describe the issue you're reporting

I'm talking about the example configuration suggested in the exporter docs. It contradicts the following recommendations from the opentelemetry-collector docs:

The order in each section below is the best practice. Refer to the individual processor documentation for more information.

  1. memory_limiter
  2. Any sampling or initial filtering processors
  3. Any processor relying on sending source from Context (e.g. k8sattributes)
  4. batch
  5. Any other processors

https://github.com/open-telemetry/opentelemetry-collector/tree/v0.111.0/processor#recommended-processors

For the memory_limiter processor, the best practice is to add it as the first processor in a pipeline. This is to ensure that backpressure can be sent to applicable receivers and minimize the likelihood of dropped data when the memory_limiter gets triggered.

https://github.com/open-telemetry/opentelemetry-collector/blob/v0.111.0/processor/memorylimiterprocessor#best-practices

The batch processor should be defined in the pipeline after the memory_limiter as well as any sampling processors. This is because batching should happen after any data drops such as sampling.

https://github.com/open-telemetry/opentelemetry-collector/blob/v0.111.0/processor/batchprocessor#batch-processor
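
Taken together, these recommendations put memory_limiter first and batch after any sampling or filtering. A minimal sketch of a pipeline in that order (the receiver and all processor settings below are illustrative placeholders, not taken from the exporter's example):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:8888"]

processors:
  memory_limiter:            # first, so backpressure can reach the receivers
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
  batch:                     # after memory_limiter and any sampling/filtering
    send_batch_size: 200
    timeout: 5s

exporters:
  googlemanagedprometheus: {}

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [memory_limiter, batch]   # memory_limiter before batch
      exporters: [googlemanagedprometheus]
```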

Also I believe it should be documented why you're renaming the location, cluster, namespace, job, instance, and project_id attributes. Are they produced by k8s? Resource Detection Processor? Or is it just in case something produces them?

x-yuri added the needs triage label Oct 13, 2024
github-actions bot added the exporter/googlemanagedprometheus label Oct 13, 2024
github-actions bot commented Oct 13, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

dashpole added the documentation label and removed the needs triage label Oct 14, 2024
dashpole self-assigned this Oct 14, 2024

dashpole commented Oct 14, 2024

Also I believe it should be documented why you're renaming the location, cluster, namespace, job, instance, and project_id attributes.

They can often be produced by things like kube-state-metrics. Even if they aren't, it is a precaution to prevent ingestion errors.


damemi commented Oct 14, 2024

Also I believe it should be documented why you're renaming the location, cluster, namespace, job, instance, and project_id attributes.

They can often be produced by things like kube-state-metrics. Even if they aren't, it is a precaution to prevent ingestion errors.

This is documented in the Resource Attribute Handling section and the official GCP docs.
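
For illustration only, here is a hedged sketch of how colliding data point attributes could be renamed with the transform processor. The exported_ prefix, the attribute list, and the exact OTTL statements are assumptions for this example; the Resource Attribute Handling section is the authoritative reference.

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # Assumed pattern: copy each colliding attribute to an "exported_"-prefixed
          # key, then drop the original so it cannot clash with the labels reserved
          # by Managed Prometheus (location, cluster, namespace, ...).
          - set(attributes["exported_location"], attributes["location"])
          - delete_key(attributes, "location")
          - set(attributes["exported_cluster"], attributes["cluster"])
          - delete_key(attributes, "cluster")
          - set(attributes["exported_namespace"], attributes["namespace"])
          - delete_key(attributes, "namespace")
```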
