When the exporter is connected to a big cluster, each call to the RefreshMetadata() method is expensive because Kafka needs to pull data from the entire cluster. Since the metadata doesn't change very often, it would be helpful to be able to control this with a refresh interval.
This way we could keep a low scrape interval on the Prometheus side for the lag/offsets and only query the metadata every once in a while. The right interval would depend on how often a topic/partition is added or changed in the cluster.
We could use a custom flag like --refresh.metadata with a default value of 30s. The metrics would still be accurate because we don't cache the lag/offsets, only the metadata.
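To illustrate the idea, here is a minimal sketch of how the exporter could throttle the expensive call: instead of calling RefreshMetadata() on every scrape, it checks whether the configured interval has elapsed and skips the cluster round-trip otherwise. The type and function names below are hypothetical, not the exporter's actual API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// metadataRefresher throttles an expensive refresh call (e.g. Sarama's
// client.RefreshMetadata()) so it runs at most once per interval.
type metadataRefresher struct {
	mu          sync.Mutex
	interval    time.Duration // value of the proposed --refresh.metadata flag
	lastRefresh time.Time
	refresh     func() error // the expensive metadata call
}

// maybeRefresh performs the refresh only if the interval has elapsed since
// the last successful refresh; otherwise the cached metadata is kept.
func (m *metadataRefresher) maybeRefresh() (bool, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if time.Since(m.lastRefresh) < m.interval {
		return false, nil // metadata still considered fresh; skip the cluster round-trip
	}
	if err := m.refresh(); err != nil {
		return false, err
	}
	m.lastRefresh = time.Now()
	return true, nil
}

// simulate models two Prometheus scrapes arriving within the interval:
// only the first one should hit the cluster.
func simulate() int {
	calls := 0
	r := &metadataRefresher{
		interval: 30 * time.Second, // suggested default for --refresh.metadata
		refresh:  func() error { calls++; return nil },
	}
	r.maybeRefresh() // first scrape: zero lastRefresh, real refresh happens
	r.maybeRefresh() // second scrape within 30s: skipped
	return calls
}

func main() {
	fmt.Println(simulate()) // prints 1
}
```

Lag/offset collection would stay on the normal scrape path and remain uncached; only the metadata lookup is rate-limited.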