Is your feature request related to a problem? Please describe.
This issue was created as a result of the discussion in #31074.
The Stanza adapter's LogEmitter has a 100-log buffer that is a source of data loss during a non-graceful collector shutdown. One solution is to remove this buffer, but doing so would likely come with a performance impact. For the Filelog receiver, this impact could be alleviated by implementing batching earlier in the Stanza pipeline, in the file consumer.
Describe the solution you'd like
From #31074 (comment):
Consider whether it's feasible to modify the file consumer to send logs in batches. It seems natural (to me) for the file consumer to emit all logs read from one file in one pass as a single batch. Having the file consumer emit batches of entries, as opposed to single entries, could alleviate the performance impact of removing batching from LogEmitter (and possibly even improve the situation, as the "natural" batches could be larger than the current artificial 100-log limit). This would require modifying all Stanza operators to operate on batches of Stanza entries rather than a single entry at a time. It seems a simple change to add a ProcessBatch method beside the existing Process method, with a for loop inside ProcessBatch that calls Process.
And further in #31074 (comment):
In my opinion, this could be a really nice improvement regardless of any other changes; the data coming out of the receiver will be more organized and intuitive. In terms of implementation, I think we can add a []string buffer to the Reader struct and then emit the entire buffer, either when it reaches a maximum size (perhaps configurable at a later time) or whenever we hit EOF. We can maintain a tentative offset which is saved only when this buffer is actually emitted.
Describe alternatives you've considered
Remove the buffering in LogEmitter and live with the performance impact this brings.
Instead of removing the buffering, persist the buffer so that its data is not lost during a non-graceful shutdown.
Change the logic of marking logs as sent, so that logs are only marked as sent once they have actually been delivered to the next consumer in the collector pipeline.
Component(s)
pkg/stanza/fileconsumer
Additional context
See #31074 (comment) and the following comments.