I have a Kafka input and an InfluxDB output, and I want to ensure there is no data loss. Let's say the InfluxDB output was misconfigured; then none of the Kafka data that Telegraf was consuming would be saved in InfluxDB. When I fixed the issue and restarted Telegraf, the Kafka input would not re-send that data, so it looks like there is data loss.
Is this expected behaviour? Is there any setting to make Telegraf not discard data that wasn't handled properly by the output plugin?
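For context, the pipeline described above can be sketched as a minimal Telegraf config. The broker address, topic name, and InfluxDB URL below are placeholders, not taken from the issue:

```toml
# Agent settings control how many metrics Telegraf holds in memory
# per output before they are dropped (oldest first).
[agent]
  interval = "10s"
  flush_interval = "10s"
  metric_buffer_limit = 10000

# Kafka consumer input (placeholder broker/topic values).
[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]
  topics = ["telegraf"]
  consumer_group = "telegraf_metrics_consumers"
  data_format = "influx"

# InfluxDB output (placeholder URL/database values).
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```

With this setup, if the output is misconfigured, metrics accumulate in the in-memory buffer only up to `metric_buffer_limit`; anything beyond that, or anything buffered at restart time, is lost, which matches the behaviour described.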
This almost sounds like #3984, but telegraf doesn't cache metrics locally, so even if that issue were resolved, as soon as you restart telegraf, current metrics collected/in queue would be lost. At least as far as I can tell.
That is correct: because we ack too soon, the data is removed from the upstream queue, and any metrics that are "in flight" will be lost if Telegraf is restarted. It is also possible that metrics consumed from the queue will be dropped from the metric buffer if they are replaced by newer metrics.
Please keep an eye on #3984 for updates about this issue.
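As a partial mitigation until that lands: newer Telegraf releases added a `max_undelivered_messages` option to the `kafka_consumer` input, which limits how many messages are read but not yet acknowledged, tying the ack to delivery. A hedged sketch, assuming a Telegraf version that supports this option (values are illustrative):

```toml
[[inputs.kafka_consumer]]
  brokers = ["localhost:9092"]   # placeholder broker address
  topics = ["telegraf"]           # placeholder topic
  data_format = "influx"

  # Cap on messages read from Kafka that have not yet been
  # delivered to the outputs; reading pauses at this limit, so
  # offsets are not committed for metrics that were never written.
  max_undelivered_messages = 1000
```

This does not make delivery fully transactional, but it prevents the consumer from racing far ahead of a failing output and acking data that will never reach InfluxDB.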