I use this filter to reverse-lookup every IP in our events, but it has become a huge bottleneck.
When the lookup is successful, it gets cached by our DNS server and is generally not a bottleneck. But when it fails, the timeout parameter hits its limit and the event continues to the next filter in our pipeline (and it never gets cached at the DNS server).
The problem is that a sane timeout of 2 seconds (long enough for successful lookups to complete) is still far too long when lookups fail. For example, at 100 events per second, if even 5% of those events have failed lookups, those 5 timeouts alone take 10 seconds to process 1 second's worth of events.
I'd propose adding a short-term (expiring) cache that remembers failed lookups and doesn't retry them for a configurable amount of time. This obviously only helps when the IPs being processed are somewhat repetitive (as they are in our case, and I'd guess in most other use cases).
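To illustrate the idea, here is a minimal sketch of such a negative cache, assuming a resolver function that returns `None` on failure and a hypothetical `ttl` parameter for how long failures are remembered (names and structure are mine, not the filter's actual API):

```python
import time

class NegativeCache:
    """Remembers failed lookups for `ttl` seconds so they aren't retried."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._failed = {}  # key -> expiry timestamp

    def is_failed(self, key, now=None):
        now = time.time() if now is None else now
        expiry = self._failed.get(key)
        if expiry is None:
            return False
        if expiry <= now:
            del self._failed[key]  # entry expired; allow a retry
            return False
        return True

    def mark_failed(self, key, now=None):
        now = time.time() if now is None else now
        self._failed[key] = now + self.ttl

def cached_reverse_lookup(ip, resolver, cache):
    """Skip the slow resolver call for IPs that recently failed."""
    if cache.is_failed(ip):
        return None  # known-bad IP; don't pay the timeout again
    result = resolver(ip)  # e.g. a gethostbyaddr call with a 2s timeout
    if result is None:
        cache.mark_failed(ip)
    return result
```

With this in place, a repeated bad IP costs one timeout per `ttl` window instead of one timeout per event, which is what makes the 5%-failure scenario above tolerable.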