
Propagate name and transport for prometheus receiver and exporter #2680

Conversation

yeya24
Contributor

@yeya24 yeya24 commented Mar 11, 2021

Signed-off-by: yeya24 <yb532204897@gmail.com>

Description:
Fixes #2533, where the Prometheus receiver and exporter names are not correctly propagated via the context.
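For background, a rough sketch (not text from the PR) of how the names get attached: obsreport derives the receiver/exporter labels on the collector's self-observability metrics from values carried on the context, so the fix is to tag the context before it reaches the scrape/export path. The ReceiverContext call below matches the one in this PR's diff; the ExporterContext signature is assumed, for illustration only:

```go
// Tag the context so obsreport metrics report the receiver name and transport.
receiverCtx := obsreport.ReceiverContext(ctx, r.cfg.Name(), transport)

// The exporter side has an analogous helper that tags the exporter name
// (signature assumed here, not copied from the collector source).
exporterCtx := obsreport.ExporterContext(ctx, cfg.Name())
```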

Link to tracking Issue: #2533

Testing:
Only tested manually with the config below:

```yaml
receivers:
  otlp:
    protocols:
      http:
  prometheus:
    config:
      scrape_configs:
        - job_name: "otel-collector"
          scrape_interval: 5s
          static_configs:
            - targets: ["localhost:8888"]
processors:
  batch:
extensions:
  health_check: {}
exporters:
  prometheusremotewrite:
    endpoint: "http://localhost:9090/api/v1/write"
    insecure: true
    external_labels:
      server: otel
  prometheus:
    endpoint: "0.0.0.0:8081"
    namespace: test-space
    const_labels:
      label1: value1
    send_timestamps: true
    metric_expiration: 180m
  logging:
    loglevel: debug
service:
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [prometheusremotewrite, prometheus]
```

The metrics I can get from localhost:8888/metrics are

[Screenshot from 2021-03-11 18-36-59: metrics output scraped from localhost:8888/metrics]


@yeya24 yeya24 requested a review from a team March 11, 2021 23:37
@codecov

codecov bot commented Mar 11, 2021

Codecov Report

Merging #2680 (aed5e10) into main (b13bc9d) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##             main    #2680   +/-   ##
=======================================
  Coverage   91.73%   91.73%           
=======================================
  Files         290      290           
  Lines       15614    15615    +1     
=======================================
+ Hits        14324    14325    +1     
  Misses        892      892           
  Partials      398      398           
| Impacted Files | Coverage Δ |
| --- | --- |
| receiver/prometheusreceiver/metrics_receiver.go | 69.69% <100.00%> (+0.94%) ⬆️ |

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b13bc9d...aed5e10. Read the comment docs.

@bogdandrutu
Member

@yeya24 thanks a lot. While reviewing this PR, I realized that our interface is error-prone, so I went ahead and created #2682, which will remove these problems.

  1. You should not need to call ExporterContext at all.
  2. You (and probably most others would do the same) did not capture the context returned by the start op, which should be passed to the end op.

I hope that PR solves your issue as well as improving usability. Can you check whether it does?
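A minimal sketch of the pattern point 2 describes — capturing the context returned by the start call and passing the same context to the end call. The function names (obsreport.StartMetricsReceiveOp / EndMetricsReceiveOp) come from this discussion; the exact signatures below are approximations for illustration:

```go
package sketch

import (
	"context"

	"go.opentelemetry.io/collector/obsreport"
)

// recordScrape illustrates the intended call pattern: the context returned by
// the start call carries the operation's span and tags, so it must be the one
// handed to the end call.
func recordScrape(ctx context.Context, receiverName string, numPoints int, scrapeErr error) {
	// Keep the returned context instead of discarding it.
	obsCtx := obsreport.StartMetricsReceiveOp(ctx, receiverName, "http")

	// ... scrape, convert, and forward the metrics here ...

	// End the operation with the same context returned by the start call.
	obsreport.EndMetricsReceiveOp(obsCtx, "prometheus", numPoints, scrapeErr)
}
```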

@yeya24
Contributor Author

yeya24 commented Mar 12, 2021

@bogdandrutu Thanks. Looks like your PR is a better way to solve this. The original interface is a little bit confusing.

You (and probably most others would do the same) did not capture the context returned by the start op, which should be passed to the end op.

Oops. I didn't even notice it returns a ctx.

@yeya24 yeya24 closed this Mar 12, 2021
@bogdandrutu
Member

@yeya24 I think we need to re-open this PR, since my PR only fixes the exporter and not the receiver part.

@bogdandrutu bogdandrutu reopened this Mar 12, 2021
@bogdandrutu
Member

Can you remove the exporter fix and leave only the receiver part?

@yeya24
Contributor Author

yeya24 commented Mar 12, 2021

Can you remove the exporter fix and leave only the receiver part?

Hi, I am happy to do that. Is the current receiver change okay or do I need to change anything?

One thing I am not very sure about: I created a receiverCtx for the ocaStore, but each transaction then calls https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/prometheusreceiver/internal/transaction.go#L150. Would it be better to put the ReceiverCtx inside obsreport.StartMetricsReceiveOp?
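Roughly, the two placements under discussion look like this (Option A is what the PR currently does; the NewOcaStore arguments are taken from the diff further down, everything else is only a sketch):

```go
// Option A (this PR): tag the context once in Start() and hand it to the
// ocaStore, so every transaction created from it inherits the receiver name
// and transport.
receiverCtx := obsreport.ReceiverContext(ctx, r.cfg.Name(), transport)
ocaStore := internal.NewOcaStore(receiverCtx, r.consumer, r.logger, jobsMap,
	r.cfg.UseStartTimeMetric, r.cfg.StartTimeMetricRegex, r.cfg.Name())

// Option B (the question above): keep the store's context untagged and call
// obsreport.ReceiverContext inside each transaction instead, right before
// obsreport.StartMetricsReceiveOp.
```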

@yeya24 yeya24 force-pushed the fix-empty-receiver-exporter-name-prometheus branch from fdee413 to a4ed41f on March 13, 2021 02:01
@yeya24
Contributor Author

yeya24 commented Mar 16, 2021

@bogdandrutu Hello, can you please take a look at this PR? Is there anything else I need to do to fix the failing test?

@bogdandrutu
Member

To fix the test, just rerun it.

Signed-off-by: yeya24 <yb532204897@gmail.com>
@yeya24 yeya24 force-pushed the fix-empty-receiver-exporter-name-prometheus branch from a4ed41f to aed5e10 on March 16, 2021 20:39
@yeya24
Contributor Author

yeya24 commented Mar 16, 2021

@bogdandrutu Can you please take another look?
I have updated my PR, and the failing test does not seem related to my change.

```
@@ -73,7 +76,8 @@ func (r *pReceiver) Start(ctx context.Context, host component.Host) error {
if !r.cfg.UseStartTimeMetric {
jobsMap = internal.NewJobsMap(2 * time.Minute)
}
ocaStore := internal.NewOcaStore(ctx, r.consumer, r.logger, jobsMap, r.cfg.UseStartTimeMetric, r.cfg.StartTimeMetricRegex, r.cfg.Name())
receiverCtx := obsreport.ReceiverContext(ctx, r.cfg.Name(), transport)
```
Member

This needs to start from context.Background(), because the context from Start should not be stored and used later (not your bug, but please fix that as well).
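That is, something along these lines (a sketch of the suggested change, not the merged code):

```go
// Build the long-lived receiver context from context.Background() rather than
// from the ctx passed to Start(), which must not be retained.
receiverCtx := obsreport.ReceiverContext(context.Background(), r.cfg.Name(), transport)
```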

Contributor Author

Sorry, I was going to update this but it was merged. Do you need me to open another PR?

@bogdandrutu bogdandrutu merged commit 1afe267 into open-telemetry:main Mar 19, 2021
@yeya24 yeya24 deleted the fix-empty-receiver-exporter-name-prometheus branch January 1, 2022 21:21