MVP: Cost attribution #10269
base: main
Conversation
Force-pushed: 5165a5b → 6f36b5f → 077a94a → f04c28f
@@ -502,6 +525,18 @@ func (s *seriesStripe) remove(ref storage.SeriesRef) {
	}

	s.active--
	if s.cat != nil {
		if idx == nil {
Same here, we should assume this isn't nil. Just skipping the removal will break the numbers forever.
Vendored and updated in commit 4706bde.
Please update the active series tracker tests with the costattribution.Tracker, otherwise the new code isn't tested.
addressed in 17b64a9
pkg/ingester/ingester.go
Outdated
idx, err := db.Head().Index()
if err != nil {
	level.Warn(i.logger).Log("msg", "failed to get the index of the TSDB head", "user", userID, "err", err)
	idx = nil
}
As commented previously, we should never proceed without an index. If you check the implementation of db.Head().Index(), it never returns an error. We have three options here:
- Skip tenants if they don't have an index: this is the least-effort one.
- Panic if err is not nil: this is ugly.
- Update mimir-prometheus to add a MustIndex() IndexReader method that does not return an error, and use that one (a sketch of this option follows).
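For reference, a minimal sketch of what that third option could look like inside the tsdb package; the actual change may differ:

// MustIndex is a sketch of the proposed helper: it encodes the
// invariant that Head.Index never fails, so callers don't have to
// handle an error that cannot happen.
func (h *Head) MustIndex() IndexReader {
	idx, err := h.Index()
	if err != nil {
		// Unreachable today; fail loudly if the invariant ever breaks,
		// rather than proceeding without an index.
		panic(err)
	}
	return idx
}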
PR in mimir-prometheus grafana/mimir-prometheus#811
Vendored and updated in commit 4706bde.
pkg/mimir/modules.go
Outdated
if t.Cfg.CostAttributionRegistryPath != "" {
	reg := prometheus.NewRegistry()
	var err error
	t.CostAttributionManager, err = costattribution.NewManager(3*time.Minute, time.Minute, t.Cfg.CostAttributionEvictionInterval, util_log.Logger, t.Overrides, reg)
I think these values should not be hardcoded.
Removed the unused parameter in b27e379.
Thanks for updating the docs! I left a few suggestions.
Co-authored-by: Oleg Zaytsev <mail@olegzaytsev.com>
			labelValues[idx] = missingValue
		}
	}
	key := t.hashLabelValues(labelValues)
I'm not entirely convinced that fetching a key as a string every time is the best approach, but it has simplified the code.
It simplifies the code at the cost of one allocation per series received. Let's revert to the previous approach, where we used a pooled byte slice to perform this lookup.
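A hedged sketch of that pooled-byte-slice lookup; keyPool, the separator byte, and the map shape are illustrative assumptions, not the PR's actual code:

package costattribution

import "sync"

// keyPool recycles the scratch buffers used to build lookup keys.
var keyPool = sync.Pool{
	New: func() any {
		b := make([]byte, 0, 128)
		return &b
	},
}

// lookupCount builds the composite key in a pooled buffer and performs
// the map lookup without allocating: a []byte-to-string conversion used
// directly inside a map index expression is optimized by the compiler.
func lookupCount(counts map[string]int, labelValues []string) int {
	bp := keyPool.Get().(*[]byte)
	buf := (*bp)[:0]
	for i, v := range labelValues {
		if i > 0 {
			buf = append(buf, 0) // NUL separator, assumed absent from label values
		}
		buf = append(buf, v...)
	}
	n := counts[string(buf)] // no string allocation here
	*bp = buf
	keyPool.Put(bp)
	return n
}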
func (t *Tracker) IncrementReceivedSamples(req *mimirpb.WriteRequest, now time.Time) {
	if t == nil {
		return
	}

	dict := make(map[string]int)
	for _, ts := range req.Timeseries {
		lvs := t.extractLabelValuesFromLabelAdapater(ts.Labels)
		dict[t.hashLabelValues(lvs)] += len(ts.TimeSeries.Samples) + len(ts.TimeSeries.Histograms)
	}
This is the hottest path in our application; we should optimize it as much as possible. Why do we need to build a new data structure (which escapes to the heap) holding []mimirpb.LabelAdapter slices that escape to the heap, and create a string that escapes to the heap, just to put it into a dict? I don't think this should even be a map, if we need some data structure at all. Can we just extract the labelValues byte slices, recycled from a pool, and process each one separately in the loop below? (A sketch follows.)
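Roughly what that could look like, reusing a pooled key buffer as in the sketch above; keyFromLabels, incrementKey, and releaseKey are hypothetical names standing in for the pooled-buffer plumbing:

// Sketch: no intermediate map, no per-series string; each series is
// processed in place with a pooled []byte key.
func (t *Tracker) IncrementReceivedSamples(req *mimirpb.WriteRequest, now time.Time) {
	if t == nil {
		return
	}
	for _, ts := range req.Timeseries {
		buf := t.keyFromLabels(ts.Labels) // pooled []byte key (hypothetical)
		count := len(ts.TimeSeries.Samples) + len(ts.TimeSeries.Histograms)
		t.incrementKey(buf, now, count) // hypothetical: bumps the counter for this key
		t.releaseKey(buf)               // return the buffer to the pool
	}
}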
out <- prometheus.MustNewConstMetric(t.activeSeriesPerUserAttribution, prometheus.GaugeValue, t.overflowCounter.activeSerie.Load(), t.overflowLabels[:len(t.overflowLabels)-1]...)
out <- prometheus.MustNewConstMetric(t.receivedSamplesAttribution, prometheus.CounterValue, t.overflowCounter.receivedSample.Load(), t.overflowLabels[:len(t.overflowLabels)-1]...)
out <- prometheus.MustNewConstMetric(t.discardedSampleAttribution, prometheus.CounterValue, t.overflowCounter.totalDiscarded.Load(), t.overflowLabels...)
Why are we doing the sub-slicing here to the length of the slice? That sounds like a noop.
if _, exists := t.observed[key]; exists {
	return
}
I think this is wrong. Sounds like we should still increment the numbers, right? Otherwise we didn't count this activeSerie, and when we delete it, we'll go into negative numbers.
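A minimal sketch of the fix the comment suggests, assuming observed maps keys to per-key counters (names are illustrative):

// Sketch: count the series even when the key was already observed;
// only skip allocating a new observation.
if o, ok := t.observed[key]; ok {
	o.activeSerie.Add(1) // still count this active series
	return
}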
// Aggregate active series from all keys into the overflow counter.
for _, o := range t.observed {
	if o != nil {
How can o be nil?
o.lastUpdate.Store(ts)
if activeSeriesIncrement != 0 {
	o.activeSerie.Add(activeSeriesIncrement)
}
We didn't check the overflow here, so we're incrementing something that isn't being used anymore, which means that the overflow number is wrong.
If we want to keep the overflow number correct, we need to handle these race conditions (and I don't think it will be easy).
previousOverflow = t.isOverflow.Swap(true)
if !previousOverflow {
	// Initialize the overflow counter.
	t.overflowCounter = &observation{}
There should be some kind of concurrency coordination here on setting this property.
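One possible shape for that coordination, as a sketch rather than the PR's code: publish the counter through an atomic pointer, so concurrent callers either see nil or a fully initialized counter, and exactly one of them performs the initialization.

package costattribution

import "sync/atomic"

type observation struct {
	activeSerie atomic.Int64
}

type tracker struct {
	overflowCounter atomic.Pointer[observation]
}

// enterOverflow initializes the overflow counter at most once, even
// under concurrency; losers of the CAS adopt the winner's counter.
func (t *tracker) enterOverflow() *observation {
	if o := t.overflowCounter.Load(); o != nil {
		return o
	}
	fresh := &observation{}
	if t.overflowCounter.CompareAndSwap(nil, fresh) {
		return fresh
	}
	return t.overflowCounter.Load()
}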
if t.isOverflow.Load() {
	// if already in overflow mode, update the overflow counter. If it was normal mode, the active series are already applied.
	if previousOverflow && activeSeriesIncrement != 0 {
		t.overflowCounter.activeSerie.Add(activeSeriesIncrement)
t.overflowCounter can be nil here.
What this PR does
This is a follow-up to #9733. The PR intends to export extra attributed metrics in the distributor and ingester, in order to get samples received, samples discarded, and active_series attributed by the cost attribution label.
Which issue(s) this PR fixes or relates to
Fixes #
Checklist
CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX].
about-versioning.md updated with experimental features.