
[Enhancement] Possibility to add services for custom exporters #931

Closed
devops-42 opened this issue Jun 15, 2022 · 15 comments
Labels: area:collector (Issues for deploying collector) · enhancement (New feature or request)

Comments

@devops-42

Hi,

with the deployment of the OpenTelemetryCollector CRD, three services will be deployed by default:

  • <APP>-collector
  • <APP>-collector-headless
  • <APP>-collector-monitoring

The documentation states that

[...] the Operator does examine the configuration file to discover configured receivers and their ports. 
If it finds receivers with ports, it creates a pair of kubernetes services, one headless, exposing those 
ports within the cluster. [...]

Is this behaviour possible for exporters too? My setup has a Prometheus exporter configured to listen on port 8889:

      prometheus:
        endpoint: ":8889"

To access this endpoint, e.g. from a remote Prometheus instance, I need to create a Service manually; that Service is not managed by the operator and may be changed or deleted without being reconciled. A sketch of such a manual Service is shown below.
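For illustration, the manually created Service might look like the following. This is a minimal sketch: the Service name and the selector labels are assumptions and must match whatever labels the operator puts on the collector pods.

apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-collector-promexporter
spec:
  selector:
    # assumed label; check the labels on the operator-created collector pods
    app.kubernetes.io/component: opentelemetry-collector
  ports:
    - name: promexporter
      port: 8889
      targetPort: 8889
      protocol: TCP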

Question: Is it possible for the operator to create additional services when using exporters that bind ports on the collector container? This issue might be related to #898.

Thanks for your feedback in advance!

Cheers

@pavolloffay added the area:collector label Jun 20, 2022
@pavolloffay
Member

Does it have to be an additional service or can the operator open the Prometheus exporter port on the existing collector service?

@devops-42
Author

Hi @pavolloffay,

thanks for your response!

There's no real need for an additional service. If the existing collector service exposed the Prometheus exporter port, that would be fine :)

Cheers

@pavolloffay added the enhancement label Jun 21, 2022
@kevinearls
Member

Hi @devops-42, I'm going to take a look at this. Can you send me the (possibly simplified) CR that you are using? It would be helpful, as I'm not really that familiar with using the prometheus exporter.

@devops-42
Author

Hi @kevinearls,

here we go:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: opentelemetry-collector
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
          
      # Collect own metrics
      prometheus:
        config:
          scrape_configs:
          - job_name: 'opentelemetry-collector'
            scrape_interval: 10s
            static_configs:
            - targets: ['0.0.0.0:8888']
        
    processors:
      batch:
    
    exporters:
      prometheus:
        endpoint: ":8889"

      logging:
        logLevel: debug
      
      [...]      
        
    service:
      pipelines:
        traces:
          [...]
        
        metrics:
          receivers: [otlp, prometheus]
          processors: [batch]
          exporters: [prometheus]
  
        logs:
          [...] 

    extensions: 
      health_check:

I omitted the parts for tracing and logging. In this setup I receive metrics via OTLP. The collector itself provides two endpoints:

  • Port 8888: provides metrics from the OpenTelemetry Collector itself
  • Port 8889: provides a Prometheus endpoint for metrics received via OTLP
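To give an idea of how these would be consumed, a remote Prometheus instance could scrape both endpoints with something like the following sketch (the target addresses are assumptions; substitute the actual service DNS names):

scrape_configs:
  - job_name: 'otel-collector-internal'   # the collector's own metrics
    static_configs:
      - targets: ['opentelemetry-collector-collector-monitoring:8888']
  - job_name: 'otel-exported-metrics'     # metrics re-exported by the prometheus exporter
    static_configs:
      - targets: ['opentelemetry-collector-collector:8889']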

Hope that helps!
Cheers

@kevinearls
Member

@devops-42 Thanks!

@lakamsani

lakamsani commented Jan 29, 2023

+1 on this.

@kevinearls just wondering if you had any chance to look into this or need any help. I am familiar with Go and OTel and have a local OTel collector dev environment, but not the operator dev environment yet. I need this for an internal o11y project where we have to send metrics received by the OTel collector to a Prometheus backend, which seems like a fairly common requirement. We could create a Prometheus ServiceMonitor (see the sketch below) if the collector deployment created via the operator were able to expose the Prometheus exporter port.
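For reference, a ServiceMonitor along these lines is what I have in mind. A minimal sketch: it assumes the collector Service exposed the exporter under a named port promexporter and carried the label shown, neither of which the operator does today.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: otel-collector-promexporter
spec:
  selector:
    matchLabels:
      # assumed label on the operator-created collector Service
      app.kubernetes.io/name: opentelemetry-collector-collector
  endpoints:
    - port: promexporter   # assumed named port for 8889
      interval: 30s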

The OTel demo project has a similar example, where a prometheus exporter port (9464) is accessible on the collector. Because this cannot yet be done via the OTel operator, they use a non-operator deployment for it. See:

https://github.com/open-telemetry/opentelemetry-helm-charts/blob/main/charts/opentelemetry-demo/examples/default/rendered/opentelemetry-collector/service.yaml#L42

The TOBS project works around this by using Prometheus push but that is not always possible. See:

https://github.com/timescale/tobs/blob/main/chart/values.yaml#L426

@kevinearls
Member

Hi @lakamsani I am no longer working on the OTEL operator, so feel free to go ahead and work on this.

@lakamsani

@VineethReddy02 FYI ☝️ if you have any thoughts on this in terms of level of effort.

@VineethReddy02
Contributor

@lakamsani this isn't an L-sized effort. I have contributed in the past to expose receiver ports based on the OpenTelemetryCollector config. Here is the parser directory exposing receiver ports. Adding something similar for exporters should do the job.

@stillya

stillya commented May 12, 2023

Any update? It would be great if we could expose the Prometheus exporter port through the operator. Currently I'm unable to configure SPM in Jaeger using the spanmetrics connector with the operator, and the deprecated spanmetrics processor isn't working well. I'd appreciate any updates on this matter.
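For context, the SPM setup in question looks roughly like this on the collector side. A minimal sketch; the receiver and the exporter endpoint are assumptions, not taken from a working config:

receivers:
  otlp:
    protocols:
      grpc:

connectors:
  spanmetrics:

exporters:
  prometheus:
    endpoint: ":8889"

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]   # the connector derives metrics from spans
    metrics:
      receivers: [spanmetrics]
      exporters: [prometheus]    # this port is what needs to be exposed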

@lakamsani

lakamsani commented May 12, 2023

@stillya you could try the approach proposed in MoizBhayani#1. We can confirm it works in our environment, at least as far as making the port available for Prometheus to scrape via the exporter. You will have to build your own operator locally with those changes.

However, based on a review of error logs, we did run into an issue with the exporter code that converts OTel metrics to Prometheus format. We opened open-telemetry/opentelemetry-collector-contrib#20885 for that.

On the MoizBhayani#1 PR we got improvement feedback via email from @jaronoff97, quoting:

There are two options with the existing logic:

  1. Create a new Kubernetes Service that targets the collector's exposed prometheus port. This would not be in the collector's CRD but rather a standalone Service deployed alongside your collector CR.
  2. Set the .Spec.Ports field as specified here, which will expose the port on the created collector service.

If you want the prometheus exporter's port added to the generated service automatically, that PR is a good start, but it will require logic to actually detect the configured port, as well as tests verifying your logic.

We have not had a chance to try either enhancement yet. I don't believe either one is related to, or will fix, the format conversion error in open-telemetry/opentelemetry-collector-contrib#20885. On that issue, our plan is to try with the latest release of the OTel SDKs and collectors, as announced here.

@pavolloffay
Member

For people looking for a workaround: the Prometheus exporter port can be exposed explicitly in the collector CR, e.g.:

  mode: deployment
  ports:
    - name: promexporter
      port: 8889
      protocol: TCP
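In a full CR this sits under spec. A minimal sketch combining the fragment above with the config posted earlier in this thread:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: opentelemetry-collector
spec:
  mode: deployment
  ports:
    # explicitly exposes 8889 on the operator-created collector Service
    - name: promexporter
      port: 8889
      protocol: TCP
  config: |
    exporters:
      prometheus:
        endpoint: ":8889"
    # ... receivers, processors, and service pipelines as in the CR above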

@lakamsani

Hi @pavolloffay, that is great to know. Is there a specific version of the OTel operator, and of the collector it pulls via the CR, that we need for this to work?

With reference to the operator releases listed here and the collector-contrib releases listed here:

We are currently using 0.72.0 of the operator with 0.74.0 of the collector-contrib. Is your suggestion supposed to work with those versions, or should we try a different/newer combination?

As of this writing, it seems 0.76.1 is the latest operator and 0.77.0 the latest collector-contrib.

@iblancasa
Contributor

Related #1689

@iblancasa
Contributor

iblancasa commented Aug 22, 2024

I think we can close this issue now, because the ports are exposed in the Service.
