cloudv2: Downscale the batch size
The backend has the limit set to 1k, so it doesn't make sense to use a
bigger value, which would force the system to split a single batch into
multiple jobs.
codebien committed Jul 14, 2023
1 parent 1278f6e commit 5f5e374
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion cloudapi/config.go
@@ -175,9 +175,13 @@ func NewConfig() Config {
 		TracesPushConcurrency: null.NewInt(1, false),

 		MaxMetricSamplesPerPackage: null.NewInt(100000, false),
-		MaxTimeSeriesInBatch:       null.NewInt(10000, false),
 		Timeout:                    types.NewNullDuration(1*time.Minute, false),
 		APIVersion:                 null.NewInt(1, false),

+		// The set value (1000) is selected for performance reasons.
+		// Any change to this value should be first discussed with internal stakeholders.
+		MaxTimeSeriesInBatch: null.NewInt(1000, false),
+
 		// Aggregation is disabled by default, since AggregationPeriod has no default value
 		// but if it's enabled manually or from the cloud service, those are the default values it will use:
 		AggregationCalcInterval: types.NewNullDuration(3*time.Second, false),
