prometheusreceiver: Unable to relabel job and instance labels using relabel_configs and metric_relabel_configs in prometheus receiver #5663
Upon some investigation, I see that the code here is trying to do a lookup of targets based on the newly relabeled values.
@dashpole - As discussed in the sig this morning, verified that this works in prometheus with the same relabel_configs.
I'm running into the same problem. Is someone working on this?
When you tried with prometheus, did you see metadata (e.g. description) for the relabeled metric? It might be that prometheus allows relabeling job+instance, but you lose metadata when doing so. The fix might be something similar to what we decided in #5001 (comment), where we should be allowing users to relabel these labels, but should pass them on without metadata.
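As a hedged illustration (not from the thread): one way to sidestep the metadata loss would be to write the replacement into a fresh label instead of overwriting `job` or `instance`; the label name `job_new` below is purely hypothetical.

```yaml
metric_relabel_configs:
  - source_labels: [__address__]
    replacement: job_replacement
    target_label: job_new  # hypothetical label; leaves the job/instance lookup keys untouched
```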
@dashpole - Sorry for the delay. I tried it with prometheus and I see that the metadata seems to be available when I tried a few targets. I queried the target metadata API to view the metadata. Was that what you were looking for?
That's really interesting. In theory, we should be querying the target metadata when we go check for metadata. It probably needs more digging.
**metric relabel configs + job**

With prometheus config:

```yaml
scrape_configs:
  - job_name: 'metric-relabel-config'
    scrape_interval: 10s
    static_configs:
      - targets: ['0.0.0.0:8888'] # self obs for collector/prometheus
    metric_relabel_configs:
      - source_labels: [__address__]
        replacement: job_replacement
        target_label: job
...
```

The collector outputs:

The prometheus server (from /api/v1/targets/metadata) shows the metric with the old metadata:

```json
{"target":{"instance":"0.0.0.0:9090","job":"metric-relabel-config"},"metric":"go_memstats_lookups_total","type":"counter","help":"Total number of pointer lookups.","unit":""}
```

But in the query UI, I can see:
**metric relabel configs + instance**

With prometheus config:

```yaml
scrape_configs:
  - job_name: 'metric-relabel-config'
    scrape_interval: 10s
    static_configs:
      - targets: ['0.0.0.0:8888'] # self obs for collector/prometheus
    metric_relabel_configs:
      - source_labels: [__address__]
        replacement: instance_replacement
        target_label: instance
...
```

The collector outputs:

The prometheus server (from /api/v1/targets/metadata) shows the metric with the old metadata:

```json
{"target":{"instance":"0.0.0.0:9090","job":"metric-relabel-config"},"metric":"go_memstats_lookups_total","type":"counter","help":"Total number of pointer lookups.","unit":""}
```

I didn't see the new metadata. But in the query UI, I can see:
So if you tried to look for metric metadata with prometheus + grafana, the metric would be treated as Unknown.

**relabel_configs + job**

With prometheus config:

```yaml
scrape_configs:
  - job_name: 'relabel-config'
    scrape_interval: 10s
    static_configs:
      - targets: ['0.0.0.0:8888'] # self obs for collector/prometheus
    relabel_configs:
      - source_labels: [__address__]
        replacement: job_replacement
        target_label: job
...
```

The collector outputs:

The prometheus server (from /api/v1/targets/metadata) shows the metric with the old metadata:

```json
{"target":{"instance":"0.0.0.0:9090","job":"relabel-config"},"metric":"go_memstats_lookups_total","type":"counter","help":"Total number of pointer lookups.","unit":""}
```

But in the query UI, I can see:
**TL;DR**

The collector's behavior is to always drop points for which we are unable to find target metadata. The prometheus server's behavior is to treat them the same as unknown-typed metrics without metadata. The fix should be similar to #5001 (comment). I'm not sure why our results differed when looking at metadata, though. What endpoint were you querying?
* Use target and metadata from context. This fixes #5757 and #5663.
* Add tests for relabeling working.
* Use Prometheus main branch (prometheus/prometheus#10473 has been merged).
* Add back the tests.
* Fix flaky test.
* Add Changelog entry.
* Add relabel test with the e2e framework.
* Update receiver/prometheusreceiver/metrics_receiver_labels_test.go
* Move changelog entry to unreleased.
* Make lint pass (needed to run make gotidy; make golint; strings.Title is deprecated).

Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
Co-authored-by: Anthony Mirabella <a9@aneurysm9.com>
Co-authored-by: Juraci Paixão Kröhling <juraci@kroehling.de>
@dashpole I believe this can be closed now.
Describe the bug
When using relabel_configs to relabel the job label, or metric_relabel_configs to relabel the job/instance labels, the collector fails to scrape metrics.
Steps to reproduce
Set target_label to job in relabel_configs, or to job/instance in metric_relabel_configs.
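For example, a minimal sketch of a reproducing config (assuming a collector self-observability target on 0.0.0.0:8888, as in the configs later in this thread; the job name is hypothetical):

```yaml
scrape_configs:
  - job_name: 'repro'              # hypothetical job name
    static_configs:
      - targets: ['0.0.0.0:8888']  # assumed self-obs endpoint
    metric_relabel_configs:
      - source_labels: [__address__]
        replacement: job_replacement
        target_label: job          # relabeling job (or instance) triggers the failure
```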
What did you expect to see?
The job/instance label should be replaced with the desired replacement value.
What did you see instead?
When using relabel_configs with target_label job, saw this error:

```
{"level":"warn","ts":1633406450.405102,"caller":"scrape/scrape.go:1104","msg":"Appending scrape report failed","kind":"receiver","name":"prometheus","scrape_pool":"prometheus_ref_app","target":"http://10.244.1.76:2112/metrics","err":"unable to find a target group with job=job_replacement"}
```
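For reference, a config of roughly this shape would produce that warning (the job name and target are taken from the log line above; the rest is a hedged reconstruction, not the exact config used):

```yaml
scrape_configs:
  - job_name: 'prometheus_ref_app'
    static_configs:
      - targets: ['10.244.1.76:2112']  # target from the log above
    relabel_configs:
      - source_labels: [__address__]
        replacement: job_replacement
        target_label: job
```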
When using metric_relabel_configs, saw this error:

```
{"level":"debug","ts":1633485477.928548,"caller":"scrape/scrape.go:1355","msg":"Unexpected error","kind":"receiver","name":"prometheus","scrape_pool":"prometheus_ref_app","target":"http://10.244.1.76:2112/metrics","series":"go_gc_duration_seconds{quantile="0"}","err":"unable to find a target with job=prometheus_ref_app, and instance=instance_replacement"}
```
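Similarly, the instance error corresponds to a metric_relabel_configs rule of roughly this shape (a hedged reconstruction from the instance=instance_replacement value in the log, not the exact config used):

```yaml
scrape_configs:
  - job_name: 'prometheus_ref_app'
    static_configs:
      - targets: ['10.244.1.76:2112']  # target from the log above
    metric_relabel_configs:
      - source_labels: [__address__]
        replacement: instance_replacement
        target_label: instance
```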
What version did you use?
Version: v0.27.0
What config did you use?
Config: (e.g. the yaml config file)
Environment
OS: Ubuntu 20.04
Compiler (if manually compiled): go 1.14