[exporterhelper] Introduce batching functionality #8685
If I'm reading the code and TODO correctly, it looks like the batch `Export()` error may or may not be returned to the caller, depending on whether the export is triggered by a timer or by a full batch. When one caller sees the error, it's because a batch that (may have?) included its own data failed synchronously while trying to send. The error handling looks inaccurate.

I'm looking to replicate the functionality in https://github.com/open-telemetry/otel-arrow/blob/main/collector/processor/concurrentbatchprocessor/README.md, which has each caller block until the data they entered into one or more batches has been decisively exported or not. Each caller ends up with a different partial error depending on whether the data they submitted lands in one or more batches. If the data they submitted ends up in exactly one batch, it will not get a partial error; it will get a total error describing the uniform result.
The solution I'm referring to blocks each caller until that caller's data has been processed and returns one of three outcomes: a total error, a partial error, or no error. It looks like the solution here may or may not block the caller until their data is processed.

The reason this matters is that I want accurate success/failure counts from the producer perspective. If the batch processor does not return accurate error information to the SDKs that produced the data, then the SDK-level metrics are of little use. It's not only about avoiding double publishing; it's about improving SDK-level metrics accuracy.
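To make the behavior I'm asking for concrete, here is a rough Go sketch, not the exporterhelper code; `batcher`, `pendingBatch`, and `send` are placeholder names. Every producer that contributes data to a batch blocks on that batch's completion and receives the same export result:

```go
// Illustrative sketch only: every caller that contributes to a batch blocks on
// the same done channel and gets that batch's export error back, so producer-
// side success/failure accounting stays accurate.
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

// pendingBatch and batcher are hypothetical names for this sketch.
type pendingBatch struct {
	items []string      // stand-in for batched telemetry
	done  chan struct{} // closed after the batch has been exported
	err   error         // export result, set before done is closed
}

type batcher struct {
	mu      sync.Mutex
	current *pendingBatch
	minSize int
	export  func(ctx context.Context, items []string) error
}

// send appends items to the current batch and blocks until that batch has
// been decisively exported, returning the batch's error to every contributor.
func (b *batcher) send(ctx context.Context, items []string) error {
	b.mu.Lock()
	if b.current == nil {
		b.current = &pendingBatch{done: make(chan struct{})}
	}
	batch := b.current
	batch.items = append(batch.items, items...)
	flush := len(batch.items) >= b.minSize
	if flush {
		b.current = nil // the next caller starts a fresh batch
	}
	b.mu.Unlock()

	if flush {
		batch.err = b.export(ctx, batch.items)
		close(batch.done)
	}

	select {
	case <-batch.done:
		return batch.err // nil, partial error, or total error for this batch
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	b := &batcher{
		minSize: 2,
		export: func(context.Context, []string) error {
			return errors.New("backend unavailable") // simulate a failed export
		},
	}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Both callers see the same error because their data shared a batch.
			fmt.Printf("caller %d: %v\n", id, b.send(context.Background(), []string{"span"}))
		}(i)
	}
	wg.Wait()
}
```

A real implementation would also flush on a timer and would distinguish partial from total errors per caller; the sketch only shows the blocking and fan-out of the export result.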
The current solution always blocks all the callers until the batch is completed or times out.

This particular comment is about splitting behavior. If one of the batch splits fails to be sent, we return an error as if the whole batch had failed. We should return a partial error instead, letting the caller know what portion failed. I can take care of this TODO in this PR. Addressing it should cover all your needs.
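Roughly, the partial error could look like the following sketch; the `splitError` type and the item counting are illustrative assumptions, not what this PR currently implements:

```go
// Rough sketch of the partial-error idea: when a batch is split and only some
// splits fail, report the failed portion instead of treating the whole batch
// as failed.
package main

import (
	"context"
	"errors"
	"fmt"
)

type splitError struct {
	failed, total int
	err           error
}

func (e *splitError) Error() string {
	return fmt.Sprintf("%d out of %d items failed: %v", e.failed, e.total, e.err)
}

// exportSplits sends each split separately and returns nil, a partial error,
// or a total error depending on how many splits succeeded.
func exportSplits(ctx context.Context, splits [][]string,
	export func(context.Context, []string) error) error {

	failed, total := 0, 0
	var lastErr error
	for _, s := range splits {
		total += len(s)
		if err := export(ctx, s); err != nil {
			failed += len(s)
			lastErr = err
		}
	}
	switch {
	case failed == 0:
		return nil
	case failed == total:
		return lastErr // total failure: uniform result for the caller
	default:
		return &splitError{failed: failed, total: total, err: lastErr}
	}
}

func main() {
	splits := [][]string{{"a", "b"}, {"c"}}
	err := exportSplits(context.Background(), splits, func(_ context.Context, s []string) error {
		if len(s) == 1 {
			return errors.New("backend rejected request")
		}
		return nil
	})
	fmt.Println(err) // 1 out of 3 items failed: backend rejected request
}
```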
BTW, the line above is not related to the TODO. It just means that we send the remainder of a split right away, even if its size is smaller than the minimum threshold. This is needed to avoid stacking blocked requests on top of each other when they are not part of the same batch.
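For illustration, a tiny sketch of that flush-the-remainder behavior (sizes and names are made up, not the PR's code):

```go
// Simplified illustration of why the remainder of a split is flushed
// immediately: holding it back until the minimum size is reached would leave
// the request that produced it blocked behind future, unrelated requests.
package main

import "fmt"

const maxBatch = 4

// splitAndFlush cuts items into maxBatch-sized chunks and returns all of them
// for immediate export, including a final chunk that may be smaller than any
// minimum threshold.
func splitAndFlush(items []int) [][]int {
	var out [][]int
	for len(items) > maxBatch {
		out = append(out, items[:maxBatch])
		items = items[maxBatch:]
	}
	if len(items) > 0 {
		out = append(out, items) // remainder goes out right away
	}
	return out
}

func main() {
	fmt.Println(splitAndFlush([]int{1, 2, 3, 4, 5, 6})) // [[1 2 3 4] [5 6]]
}
```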