Expose /metrics on a separate port #8060
@fabxc would etcd get an allocation on https://github.com/prometheus/prometheus/wiki/Default-port-allocations or is that something different?
That would generally make sense @heyitsanthony. Next one up is usually marked. It is 9267 right now. This should of course be configurable if a user desires to bind it differently.
TBH, I'm not too much of a fan of dedicated Prometheus ports beyond our core components. It's just not practical to maintain those global registries. As an etcd user, I'd probably prefer the default metric port to be the default API port + 1 or similar, so it's easier to remember.
FWIW, certainly not all exporters and applications comply with it, so etcd being a relatively prominent application, I think it's valid to choose whatever we like best. The wiki page is also open and I don't think anyone monitors the changes much, so its reliability is questionable either way (I've brought this up before, but it was decided to keep it as guidance for now).
Do you think this is a reasonable enhancement we could do in 3.3?
@xiang90 @heyitsanthony Do we want to move /metrics to the new port entirely, or keep serving it on the client port as well?
Duplicate the handler. There could be clients that have access to the internal port that expect /metrics to be available.
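For illustration, a minimal sketch of what "duplicating the handler" could look like with the Prometheus Go client. The mux layout and port numbers here are assumptions for the example, not etcd's actual wiring:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// One shared metrics handler, mounted twice.
	metrics := promhttp.Handler()

	// The client-facing mux keeps /metrics, so existing consumers of
	// the internal port don't break.
	clientMux := http.NewServeMux()
	clientMux.Handle("/metrics", metrics)
	// ... client API routes would be registered here ...

	// The same handler also goes on a dedicated mux bound to a separate
	// port that can be firewalled independently of the client API.
	metricsMux := http.NewServeMux()
	metricsMux.Handle("/metrics", metrics)

	go func() {
		log.Fatal(http.ListenAndServe("0.0.0.0:2381", metricsMux))
	}()
	log.Fatal(http.ListenAndServe("0.0.0.0:2379", clientMux))
}
```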
Awesome! Thanks for the collaboration everyone!
etcd v3.3 introduced a new flag to allow serving `/metrics` and `/health` under a different port than e.g. `/v2/keys`. This allows us to protect etcd's data via firewall rules but still let monitoring tools access the monitoring information. See the feature request in the etcd repo: etcd-io/etcd#8060. The implementation landed in v3.3: etcd-io/etcd#8242. This PR instructs etcd to serve metrics and health under the additional port `2381` *unconditionally* **when the used etcd binary is** `>=v3.3.x`. However, if not explicitly set in the `senza.yaml`, this port won't be mapped to the outside and therefore isn't accessible. It doesn't expose more information than anything under `2379` already does.
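To make the behavior concrete, a minimal sketch of a liveness check against the extra port, assuming the `2381` default from this PR and an etcd `>=v3.3` binary serving `/health` there:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Assumes etcd >= v3.3 serving /health and /metrics on the extra
	// port 2381, as described in the PR above.
	resp, err := http.Get("http://127.0.0.1:2381/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("health status:", resp.Status) // expect 200 OK when healthy
}
```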
Is it correct that even if one activates additional URLs with distinct ports for etcd serving metrics (per #8242), the server still demands the same authentication credentials as the other etcd ports (2379, 2380)? It is challenging to arrange for Prometheus to have the client key and certificate available to authenticate to etcd, as mentioned in kubernetes/kubernetes#53405. Running a dedicated Prometheus server on each Kubernetes master machine would allow mounting the client key and certificate files from the host, permissions willing. An alternative is to place those files into a Secret object, though creating such a Secret poses challenges too.
You can specify an HTTP URL, bypassing TLS auth.
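Concretely, the flag that #8242 landed as is `--listen-metrics-urls`, which accepts plain `http://` URLs even when the client ports use TLS. A minimal sketch of the difference from the scrape side, assuming `--listen-metrics-urls=http://0.0.0.0:2381` and hypothetical certificate paths:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Metrics port over plain HTTP: no client certificate required.
	resp, err := http.Get("http://127.0.0.1:2381/metrics")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("metrics over HTTP:", resp.Status)

	// The client port 2379, by contrast, needs a TLS client configured
	// with the etcd client certificate and key.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // hypothetical paths
	if err != nil {
		log.Fatal(err)
	}
	tlsClient := &http.Client{
		Transport: &http.Transport{
			// NOTE: a real setup would also set RootCAs to verify the server.
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
		},
	}
	resp, err = tlsClient.Get("https://127.0.0.1:2379/metrics")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("metrics over mutual TLS:", resp.Status)
}
```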
Thank you, @gyuho. I see that accommodation here in
In many scenarios etcd nodes run on designated nodes that are very restrictive for outside traffic. If we want to monitor etcd, we don't need access to any critical APIs but just `/metrics`. It's not generally feasible to run Prometheus on the same nodes as etcd, though. Having etcd expose `/metrics` on a designated port that can be exposed relatively safely would solve those scenarios. It could be an optional flag that exposes an additional `/metrics` endpoint on another address if set. @xiang90 @brancz
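A minimal sketch of the optional-flag idea described above, using a hypothetical `-listen-metrics-addr` flag purely for illustration; this is not etcd's actual implementation (the flag that eventually shipped in v3.3 is `--listen-metrics-urls`):

```go
package main

import (
	"flag"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// listenMetricsAddr is a hypothetical flag name for this sketch.
var listenMetricsAddr = flag.String("listen-metrics-addr", "",
	"if set, additionally serve /metrics on this address")

func main() {
	flag.Parse()

	// Only start the extra listener when the operator opted in.
	if *listenMetricsAddr != "" {
		mux := http.NewServeMux()
		mux.Handle("/metrics", promhttp.Handler())
		go func() {
			log.Fatal(http.ListenAndServe(*listenMetricsAddr, mux))
		}()
	}

	// ... the regular client and peer listeners would start here ...
	select {} // placeholder: block forever in this sketch
}
```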