[receiver/prometheusoperator] Add base structure #6344
Conversation
I like the idea of being able to reuse the CRDs from the Prometheus operator, but I'd like to see some more detail about how this receiver would function and how it would integrate with the existing Prometheus receiver. Can you prepare a design document?
@open-telemetry/collector-contrib-approvers (or somebody else) Is someone with Prometheus knowledge able to review this?
```go
import (
	"go.opentelemetry.io/collector/config"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
```
This is an unstable dependency; its structs should not be part of the public API of the receiver (otherwise we may not be able to use the latest k8s.io version).
@Aneurysm9 Do you have an example of such a document for OpenTelemetry, or what is the preferred format for this?
I don't think we need anything fancy. A
Agree. We have multiple pieces of Prometheus support scattered around, including in the operator (like the target allocator feature, of which @Aneurysm9 is the maintainer). It would be great to regroup and define what the ideal solution would look like for different use cases and components. cc @alolita, as you probably also have an interest in the target allocator (and around Prometheus in general).
The target allocation capability in the operator is precisely why this piqued my interest. I had the configuration generation capability of the Prometheus operator exposed so that it could be incorporated there, and I'd hope that the approach we take to include it directly in the collector can also be reused and integrated there.
@Aneurysm9 I added a first synopsis of my initial thoughts. Should there still be interest, I can develop the document further.
cc @dashpole
@jpkrohling thx! Taking a look at this proposal.
Force-pushed from 0a01f95 to dbe658c
@secustor, could you add a "when to use what" section? For people getting started with OpenTelemetry, it might be confusing to understand all the available components (this, other Prometheus receivers, otel-operator, ...) and when to use them.
This reverts commit 70c53a42ded66edac9ce9572890905c62398f632.
Force-pushed from dbe658c to cd6f61c
@jpkrohling I have added such a section to the README of
Only reviewed the design.
What is the plan for managing TLS certs? The prometheus operator adds these to prometheus as a volume, but we need a plan for managing these ourselves.
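For context, a minimal ServiceMonitor sketch below shows where those certs typically come from: the TLS material is referenced as a Kubernetes Secret, which the Prometheus operator resolves by mounting it into the Prometheus pod, so a receiver would need its own mechanism to fetch it. All names here are illustrative.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app               # illustrative
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: metrics
      scheme: https
      tlsConfig:
        # the operator mounts this Secret into the Prometheus pod as a volume;
        # a receiver would have to read the Secret contents itself
        ca:
          secret:
            name: example-app-ca
            key: ca.crt
```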
Instead of writing it onto a disk, the configuration is unmarshalled using the Prometheus config loader, which is already in use by the `prometheusreceiver`, resulting in a full Prometheus config.

In case of an [Agent](#collector-vs-agent-deployment) deployment, which is signaled with the `limit_to_node` option,
Should we just allow specifying filters for PodMonitor/ServiceMonitor like you can do for prometheus? Seems like something we would want eventually, and would cover this case.
I had 3 limitation options in mind:
- namespace(s) which are to be watched for monitor objects
- a label selector to limit monitor objects in these namespaces (as it is set up in the Prometheus CRD of the PrometheusOperator)
- the node limiter option, which is used for an agent-style deployment.
The first two are currently provided by the PrometheusOperator ConfigGenerator package, and the latter is implemented using additional relabel configs.
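To make these options concrete, here is a purely hypothetical configuration sketch; only `limit_to_node` is taken from the proposal, while the receiver name and the other keys are invented for illustration.

```yaml
receivers:
  prometheus_operator:                 # hypothetical receiver name
    namespaces: [monitoring, default]  # illustrative: namespaces watched for monitor objects
    label_selector: "team=backend"     # illustrative: label selector applied to monitor objects
    limit_to_node: "worker-1"          # from the proposal: agent-style, node-local scraping
```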
I'm just suggesting that we re-use the underlying prometheus namespaces and selector structure. It allows specifying the role (podmonitor or servicemonitor in this case), a label selector, and a field selector. The field selector would allow limiting to podmonitors or servicemonitors on the same node, but is more general than your proposed node limiter option. Because of the "role" field, it would also allow supporting only podmonitors, or only servicemonitors, and allows different label or field selectors for each. Re-using the prometheus server's structure for this config would make it familiar to those already familiar with kubernetes_sd_configs.
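As a rough sketch of what is being suggested (not part of the proposal), this is the shape of the existing Prometheus `kubernetes_sd_configs` selector structure; the field selector covers the node-local case, and the role would presumably become podmonitor/servicemonitor in the receiver.

```yaml
scrape_configs:
  - job_name: example                         # illustrative
    kubernetes_sd_configs:
      - role: pod                             # the receiver would likely accept podmonitor/servicemonitor here
        namespaces:
          names: [monitoring]                 # namespace restriction
        selectors:
          - role: pod
            label: "app=example"              # label selector
            field: "spec.nodeName=worker-1"   # field selector, e.g. limit to the local node
```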
which is already in use by the `prometheusreceiver`, resulting in a full Prometheus config.

In case of an [Agent](#collector-vs-agent-deployment) deployment, which is signaled with the `limit_to_node` option, only local endpoints will be fetched; endpoints should be filtered so that only those pods are scraped which are scheduled
As a note, it is not recommended to watch endpoints (or endpointslices) from each node. The apiserver has a watch index for pods by node name, meaning it is acceptable to watch pods assigned to each node from a daemonset, but does not have the same for endpoints.
Makes sense, but I'm not sure how to solve this.
The only option I see, other than introducing a new index in Kubernetes, is a shared cache. This could maybe be done as an extension.
I'm just noting it; I don't think it is easily solvable. I think we should not recommend using a daemonset with servicemonitors to users because of this.
### Collector
If running as a collector, the Prometheus config provided by PrometheusOperator can be reused without a change. Should multiple instances with the same config run in the same cluster, they will act like a high-availability pair of Prometheus servers. Therefore, all targets will be scraped multiple times and telemetry
HA is nice, but this means the collector can't shard work at all, and can't scale up replicas to reduce load. Did you consider supporting sharding with the hashmod action, like the prometheus operator does?
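For reference, this is roughly the relabeling the Prometheus operator injects per shard; the shard count and this replica's shard index below are illustrative.

```yaml
relabel_configs:
  # hash each target's address into one of N buckets (N = number of shards)
  - source_labels: [__address__]
    modulus: 4
    target_label: __tmp_hash
    action: hashmod
  # each replica keeps only the targets that fall into its own bucket
  - source_labels: [__tmp_hash]
    regex: "1"
    action: keep
```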
I wasn't aware of this until now. This is definitely a useful addition when the receiver is set up as a collector!
I will work this into the proposal.
The other thing to consider is the OpenTelemetry Operator's prometheus target allocation capability. It is designed to allow running multiple collector instances and distributing targets across them. It will re-allocate targets if a collector instance is added or removed. I think adding the ability to utilize the pod and service monitors there should be considered as an alternative to building this into a receiver.
@Aneurysm9 do you have a link to a design document for the target allocator? I couldn't find any in the OpentelemetryOperator repo or on opentelemetry.io.
If the community prefers to implement this first in the target allocator, I will work on that instead.
Here's the initial outline of the capability and here's the more detailed design doc for the target allocator service.
The target allocation server is set up to reload when the config file it uses changes. It should be feasible to add a watch for the Prometheus CRDs and use them to update the config, which will then cause the allocator to start using the generated SD configs.
In every other case other Prometheus receivers should be used. Below you can find a short description of the available options.

### Prometheus scrape annotations
I don't think you even need to use the receivercreator in this case. You can just use the __meta_kubernetes_pod_annotation_prometheus_io_scrape label to filter pods (use the equivalent for endpoints) directly in the prometheusreceiver.
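A sketch of that approach as a plain `prometheusreceiver` scrape config (the job name is illustrative):

```yaml
scrape_configs:
  - job_name: annotated-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```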
Should work as you describe it. I agree that the prometheusreceiver is preferable in that case.
I will adapt the section here too.
It provides a simplified interface around the `prometheusreceiver`. Use cases could be the federation of Prometheus instances or scraping of targets outside dynamic setups.

### Prometheus service discovery and manual configuration
Just documenting some investigation I've done in the past: Why not just implement PodMonitor and ServiceMonitor using the prometheus service discovery? That would have the benefit of not needing to shut down and restart the prometheus receiver when a PodMonitor or ServiceMonitor is modified.
Answer: The prometheus service discovery interface only supports adding new targets, but doesn't support manipulating metrics after they are scraped. So we wouldn't be able to support metricRelabelConfigs with that approach.
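For illustration, this is the kind of ServiceMonitor feature a pure service-discovery implementation could not support: `metricRelabelings` act on samples after the scrape, and the SD interface offers no hook for that (names are illustrative).

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: metrics
      # applied to scraped samples, not to target discovery
      metricRelabelings:
        - sourceLabels: [__name__]
          regex: "go_.*"
          action: drop
```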
This PR was marked stale due to lack of activity. It will be closed in 7 days.
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Is this something that will be worked on in the future? Or should I stop removing the
I think this proposal is still valid. The current focus is to implement support for this in the TargetAllocator, as described in this comment. Is there a way to remove this PR from the lifecycle?
Not that I know of, other than closing it and re-opening when it can be worked on again.
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Closed as inactive. Feel free to reopen if this PR is still being worked on.
Description:
Adding structure for a new receiver based on the `prometheus` receiver. The target is support for a subset of PrometheusOperator CRDs as a configuration option. This should give the user the option to gather metrics from targets defined by CRDs such as `ServiceMonitor` or `PodMonitor`. These are provided mostly by the applications themselves.
Link to tracking Issue: #6345
Testing: Only standard config parsing tests at the moment, as this PR only contains the structure.
Documentation: Added a README which describes the status and options of this receiver.