This repository has been archived by the owner on Aug 13, 2019. It is now read-only.

Expose prometheus_tsdb_lowest_timestamp metric #363

Merged 4 commits on Sep 14, 2018
Changes from 3 commits
13 changes: 13 additions & 0 deletions db.go
@@ -123,6 +123,7 @@ type dbMetrics struct {
compactionsTriggered prometheus.Counter
cutoffs prometheus.Counter
cutoffsFailed prometheus.Counter
startTime prometheus.GaugeFunc
tombCleanTimer prometheus.Histogram
}

@@ -157,6 +158,17 @@ func newDBMetrics(db *DB, r prometheus.Registerer) *dbMetrics {
Name: "prometheus_tsdb_retention_cutoffs_failures_total",
Help: "Number of times the database failed to cut off block data from disk.",
})
m.startTime = prometheus.NewGaugeFunc(prometheus.GaugeOpts{
Name: "prometheus_tsdb_start_time_seconds",
Collaborator

prometheus_tsdb_lowest_timestamp makes more sense here, as start_time could be confused with the process start time and it may or may not be seconds :)

Contributor

I think we should keep the unit suffix and enforce the metric's value to be in seconds though.

Collaborator

No. From a general TSDB perspective, there is no reason that the time passed into .Append(t int64, v float64) is a wall time. It could be anything, and people may even pass nanoseconds.
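For illustration, a minimal sketch of this point, assuming the package's usual Appender/Add/Commit API (the snippet is not part of this PR): the appender takes a bare int64 timestamp, so the unit is entirely the caller's choice.

```go
package example

import (
	"time"

	"github.com/prometheus/tsdb"
	"github.com/prometheus/tsdb/labels"
)

// appendSamples illustrates that the appender accepts a bare int64 timestamp:
// the unit is whatever the caller decides, and the TSDB does not interpret it.
func appendSamples(db *tsdb.DB) error {
	app := db.Appender()

	// Prometheus happens to pass milliseconds since the epoch...
	if _, err := app.Add(labels.FromStrings("job", "wallclock_ms"), time.Now().Unix()*1000, 1); err != nil {
		return err
	}
	// ...but another caller could just as well pass nanoseconds (or any int64).
	if _, err := app.Add(labels.FromStrings("job", "nanoseconds"), time.Now().UnixNano(), 1); err != nil {
		return err
	}
	return app.Commit()
}
```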

Contributor

Hmm. Does it make sense to have an "opaque" metric then? Wouldn't it be better to expose the raw value to the caller (e.g. Prometheus) and let it compute the metric?

Contributor Author

If Prometheus appends millisecond timestamps to the TSDB, then this metric as collected is slightly harder to work with alongside built-ins such as the time() function. For that reason I agree it would be nice to follow metric naming best practices and include the base unit as a suffix in the metric name, converting the millisecond timestamp to a Unix timestamp in seconds where necessary. As @simonpasquier mentioned, instrumenting this metric directly in Prometheus instead would allow for that.

That said, there may be other consumers of the TSDB for which exposing this lowest timestamp metric directly would still be useful.
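A rough sketch of the alternative described here, not what this PR implements: a consumer that knows its timestamps are milliseconds (as Prometheus's are) could divide by 1000 so the gauge can honestly carry a _seconds suffix. The lowestTimestamp field name and the metric name below are hypothetical; the rest mirrors the GaugeFunc in the diff above.

```go
// Hypothetical variant, assuming the caller stores millisecond timestamps:
// expose the lowest timestamp in seconds so the unit suffix is accurate.
m.lowestTimestamp = prometheus.NewGaugeFunc(prometheus.GaugeOpts{
	Name: "prometheus_tsdb_lowest_timestamp_seconds",
	Help: "Lowest timestamp stored in the database, in seconds since the epoch.",
}, func() float64 {
	db.mtx.RLock()
	defer db.mtx.RUnlock()
	minT := db.head.minTime
	if len(db.blocks) > 0 {
		minT = db.blocks[0].meta.MinTime
	}
	return float64(minT) / 1000 // milliseconds -> seconds (assumes the caller's unit)
})
```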

Help: "Oldest timestamp stored in the database.",
}, func() float64 {
db.mtx.RLock()
defer db.mtx.RUnlock()
if len(db.blocks) == 0 {
return float64(db.head.minTime)
}
return float64(db.blocks[0].meta.MinTime)
})
m.tombCleanTimer = prometheus.NewHistogram(prometheus.HistogramOpts{
Name: "prometheus_tsdb_tombstone_cleanup_seconds",
Help: "The time taken to recompact blocks to remove tombstones.",
@@ -170,6 +182,7 @@ func newDBMetrics(db *DB, r prometheus.Registerer) *dbMetrics {
m.cutoffs,
m.cutoffsFailed,
m.compactionsTriggered,
m.startTime,
m.tombCleanTimer,
)
}