Partition HTTP requests to loki by stream for improved throughput #6041
Comments
Discussion in Discord with a user that would benefit from this: https://discord.com/channels/742820443487993987/746070591097798688/799285661449322537
This would be great :)
@jszwedko Hello, when is this feature planned to be supported? The throughput of Vector cannot be improved at present.
Hi @wgb1990. This feature hasn't been scheduled yet.
Noting this came up in Discord again: https://discord.com/channels/742820443487993987/746070591097798688/821864991270633512
@jszwedko As part of #5973, the … I would prefer this to be a warning instead of a hard-coded value. Perhaps this should be included as part of the documentation rather than limiting the performance of Vector? In addition, there is progress in Loki to provide some leniency around out-of-order log submissions. Is there a chance we could get …
Broken off from #5973 (comment)
Right now users are limited to sending only one HTTP request at a time to Loki due to the ordering requirements Loki has around streams, which can cause failures when requests are issued in parallel for a given stream. This constrains the throughput Vector is able to achieve.
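To make the constraint concrete, here is a minimal sketch of the per-stream ordering rule that makes parallel in-flight requests for the same stream unsafe. This is an illustrative model only, not Loki's actual code; the `StreamState` type and its behavior are assumptions based on Loki rejecting entries older than the newest accepted entry for a stream.

```rust
use std::collections::HashMap;

/// Illustrative model of Loki's per-stream ordering check: an entry older
/// than the newest accepted entry for the same stream is rejected.
struct StreamState {
    last_accepted_ns: HashMap<String, i64>,
}

impl StreamState {
    fn push(&mut self, stream: &str, timestamp_ns: i64) -> Result<(), String> {
        let last = self
            .last_accepted_ns
            .entry(stream.to_string())
            .or_insert(i64::MIN);
        if timestamp_ns < *last {
            // Two in-flight requests for the same stream can interleave so the
            // later-sent batch arrives first, causing the earlier batch to be
            // rejected here.
            return Err(format!("entry for stream {} out of order", stream));
        }
        *last = timestamp_ns;
        Ok(())
    }
}

fn main() {
    let mut state = StreamState { last_accepted_ns: HashMap::new() };
    assert!(state.push("{app=\"web\"}", 2).is_ok());
    // An older entry for the same stream now fails, which is why the sink keeps
    // requests for a given stream strictly sequential today.
    assert!(state.push("{app=\"web\"}", 1).is_err());
}
```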
#5973 improves this situation by letting users modify the behavior on failure by, for example, rewriting the timestamps, but I think it would be advantageous to modify the `loki` sink to partition batched HTTP requests by stream, increasing throughput by allowing multiple HTTP requests for different streams to be issued simultaneously.
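A rough sketch of the proposed partitioning, in Rust for illustration only (none of these types exist in Vector; `LokiEvent`, `stream_key`, and `partition_by_stream` are hypothetical names): key batches by the stream's label set, so that requests for different keys can be in flight concurrently while ordering is still preserved within each stream.

```rust
use std::collections::BTreeMap;

/// A single log line destined for Loki (hypothetical, simplified).
struct LokiEvent {
    labels: BTreeMap<String, String>, // the Loki stream labels
    timestamp_ns: i64,
    line: String,
}

/// Key batches by the stream's label set so that ordering only has to be
/// preserved *within* a stream, not across the whole sink.
fn stream_key(labels: &BTreeMap<String, String>) -> String {
    labels
        .iter()
        .map(|(k, v)| format!("{}={}", k, v))
        .collect::<Vec<_>>()
        .join(",")
}

/// Group a batch of events into per-stream batches. Each per-stream batch can
/// be sent as its own HTTP request; requests for *different* keys can be in
/// flight concurrently, while requests for the *same* key are still sent one
/// at a time and in timestamp order.
fn partition_by_stream(events: Vec<LokiEvent>) -> BTreeMap<String, Vec<LokiEvent>> {
    let mut partitions: BTreeMap<String, Vec<LokiEvent>> = BTreeMap::new();
    for event in events {
        partitions.entry(stream_key(&event.labels)).or_default().push(event);
    }
    // Keep entries ordered by timestamp inside each stream, as Loki requires.
    for batch in partitions.values_mut() {
        batch.sort_by_key(|e| e.timestamp_ns);
    }
    partitions
}

fn main() {
    let events = vec![
        LokiEvent {
            labels: BTreeMap::from([("app".into(), "web".into())]),
            timestamp_ns: 2,
            line: "second".into(),
        },
        LokiEvent {
            labels: BTreeMap::from([("app".into(), "db".into())]),
            timestamp_ns: 1,
            line: "first".into(),
        },
    ];
    // Two partitions here ("app=db" and "app=web"), so two requests could be
    // issued concurrently without violating per-stream ordering.
    let partitions = partition_by_stream(events);
    println!("{} stream partitions", partitions.len());
}
```

With partitioning along these lines, the sink could keep one in-flight request per stream key instead of one request globally, which is where the throughput gain would come from.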