Provide bytesize filtering capability #1115
It seems like there are several options that could be pursued here from the beats side:
A third option could be routing events to different instances of LS. From my point of view, though, that is not something that should happen on the beats side: when it comes to sending events, we should keep the complexity on the beats side to a minimum.
This should be easily done with the generic filtering feature: #451
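(For context: the generic filtering referenced in #451 eventually shipped as beats processors. A minimal sketch of what that feature looks like in a filebeat configuration; the regexp condition below is an illustrative assumption, and note that processor conditions match on field values, not on an event's byte size, which is exactly the gap this issue describes.)

```yaml
processors:
  # Drop events whose message field matches a pattern.
  # Conditions operate on field contents; there is no
  # built-in byte-size condition.
  - drop_event:
      when:
        regexp:
          message: "^DEBUG"
```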
Any progress on this? This will help users who want the ability to filter out, tag, and route occasional large/gigantic events closer to the source.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Discussed with the LS team earlier today. There are cases where incoming events vary widely in size, from the regular 3-4 KB per event to something potentially huge like 100+ MB (e.g. an application log dump of an XML response). While some users may be able to simply drop these events, others have to retain them for compliance.

One optimization idea being implemented is to create a custom ruby filter in LS that calculates the bytesize of the message field of an event and sends it to separate topics/lists in a queuing system, so that dedicated LS indexers handle abnormally large events while other LS indexers handle the regular-sized ones. @suyograo suggests filing an enhancement request for beats, since it is best to perform such byte-size filtering further upstream.
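A minimal sketch of that routing idea as a Logstash pipeline fragment. The 1 MB threshold, the Kafka output, and the topic names are illustrative assumptions rather than details from this thread, and the Event API shown (`event.get`/`event.set`) is the Logstash 5.x+ one:

```
filter {
  # Record the byte size of the message field in event metadata;
  # [@metadata] fields are not indexed with the event itself.
  ruby {
    code => "event.set('[@metadata][bytesize]', event.get('message').to_s.bytesize)"
  }
}

output {
  # Route abnormally large events to a dedicated topic handled by
  # separate LS indexers; 1048576 bytes (1 MB) is an assumed cutoff.
  if [@metadata][bytesize] > 1048576 {
    kafka {
      bootstrap_servers => "localhost:9092"
      topic_id => "events-oversized"
    }
  } else {
    kafka {
      bootstrap_servers => "localhost:9092"
      topic_id => "events-regular"
    }
  }
}
```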