cloudv2: Downscale the batch size
The backend has the limit set to 1k, so it doesn't make sense to use a
bigger value and force the system to split a single batch into multiple
jobs.
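To make the splitting concrete, here is a minimal, self-contained Go sketch, not k6 code: the splitIntoJobs helper and the series representation are illustrative assumptions. It shows why a client-side batch larger than the backend's 1k limit turns into multiple backend jobs.

// Illustrative sketch only, not k6 code: how a batch that exceeds the
// backend's per-batch limit ends up split into multiple jobs.
package main

import "fmt"

const maxTimeSeriesInBatch = 1000 // the new default, matching the backend limit

// splitIntoJobs chunks a set of time-series IDs into jobs that each
// respect the backend's per-batch limit. The helper is hypothetical.
func splitIntoJobs(seriesIDs []int, limit int) [][]int {
	var jobs [][]int
	for len(seriesIDs) > 0 {
		n := limit
		if len(seriesIDs) < n {
			n = len(seriesIDs)
		}
		jobs = append(jobs, seriesIDs[:n])
		seriesIDs = seriesIDs[n:]
	}
	return jobs
}

func main() {
	// A batch of 2500 series: with a client-side limit above 1000 it would
	// be sent whole, and the backend would have to split it into 3 jobs.
	seriesIDs := make([]int, 2500)
	fmt.Println(len(splitIntoJobs(seriesIDs, maxTimeSeriesInBatch))) // 3
}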
codebien committed Jul 13, 2023
1 parent ea3a23d commit c43ba7f
Showing 1 changed file with 7 additions and 1 deletion.
8 changes: 7 additions & 1 deletion cloudapi/config.go
@@ -175,9 +175,15 @@ func NewConfig() Config {
 		TracesPushConcurrency: null.NewInt(1, false),

 		MaxMetricSamplesPerPackage: null.NewInt(100000, false),
-		MaxTimeSeriesInBatch:       null.NewInt(10000, false),
 		Timeout:                    types.NewNullDuration(1*time.Minute, false),
 		APIVersion:                 null.NewInt(1, false),
+
+		// The backend sets the same limit. Using a higher value here could
+		// impact backend performance, because it would force the backend to
+		// split a single batch into more jobs, generating stress for the
+		// system. Any change should be discussed with the backend team first.
+		MaxTimeSeriesInBatch: null.NewInt(1000, false),
+
 		// Aggregation is disabled by default, since AggregationPeriod has no default value
 		// but if it's enabled manually or from the cloud service, those are the default values it will use:
 		AggregationCalcInterval: types.NewNullDuration(3*time.Second, false),
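For readers unfamiliar with the null.NewInt(1000, false) idiom: a minimal sketch of the guregu/null pattern in play, assuming the gopkg.in/guregu/null.v3 package that this config file uses. The false flag marks the 1000 as a non-explicit default; the merge step below is a hypothetical stand-in for illustration, not k6's actual config-merging code, but it shows why an explicitly set value still wins over this default.

// Illustrative sketch of the null.Int default-vs-explicit pattern.
// The merge logic is a hypothetical stand-in, not k6's config code.
package main

import (
	"fmt"

	"gopkg.in/guregu/null.v3"
)

func main() {
	// Valid == false: a default that nothing has explicitly set.
	defaultBatch := null.NewInt(1000, false)

	// Valid == true: a value set explicitly by the user or the cloud service.
	userBatch := null.IntFrom(500)

	effective := defaultBatch
	if userBatch.Valid {
		effective = userBatch // explicit settings override the soft default
	}
	fmt.Println(effective.Int64) // 500
}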
