Feature - Reverse DNS Enrichment #7770
Comments
I have the same need locally :)
Ooh, I love the enrichment for private addresses :-) The classification of the IP address in itself* would also be great to have as part of the event. Do we already have a way to do this classification in Beats? Or were you just laying out an expressive example and we don't actually have this yet? I want that as part of the events because down the line, I can then do my GeoIP and reverse lookups centrally only on the public ones :-) * By classification I mean: is it local, private, multicast, public or other.
The configuration is just an idea at the moment. But you mention a use case I had not considered -- adding a classification to the event itself. I was only considering using that information in the processor's conditional logic rather than adding it to the event. I'll take that into consideration.
In case this helps, in the draft for Suricata in Logstash, I winged the classification based on a few Wikipedia resources, and skipped multicast & carrier-grade networks (they were incorrectly flagged as public, which was the fallback when an address didn't match the local or private CIDRs). Here's what I used:
There must be a better way than this, though. Do you think Go network libraries can help directly with this? In case they don't, here are a few of the resources I used to wing it :-)
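As a rough Go sketch of the CIDR-matching approach described here (the class names and ranges below are assumptions drawn from the discussion, including the RFC 6598 carrier-grade NAT block that the public fallback was mis-flagging):

```go
package main

import (
	"fmt"
	"net"
)

// classes lists CIDR ranges to check in order; the first match wins,
// and anything that matches nothing falls back to "public". The
// names and ranges here are illustrative assumptions.
var classes = []struct {
	name  string
	cidrs []string
}{
	{"local", []string{"127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10"}},
	{"private", []string{"10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7"}},
	{"carrier_grade_nat", []string{"100.64.0.0/10"}}, // RFC 6598; easy to mis-flag as public
	{"multicast", []string{"224.0.0.0/4", "ff00::/8"}},
}

func classify(ip net.IP) string {
	for _, class := range classes {
		for _, cidr := range class.cidrs {
			if _, network, err := net.ParseCIDR(cidr); err == nil && network.Contains(ip) {
				return class.name
			}
		}
	}
	return "public" // fallback, as in the draft described above
}

func main() {
	for _, addr := range []string{"192.168.1.10", "100.70.0.1", "8.8.8.8", "224.0.0.251"} {
		fmt.Println(addr, "->", classify(net.ParseIP(addr)))
	}
}
```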
I have an implementation of the condition that I was experimenting with; it allows for named classes in addition to custom CIDRs: andrewkroh@8ca8a03
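For reference, here is a minimal sketch of how the standard library's `net.IP` predicates can drive named classes like these (illustrative only, not the code from the linked commit; note that `IsPrivate` requires Go 1.17+, and that `IsGlobalUnicast` also matches RFC 1918 addresses, so the check order matters):

```go
package main

import (
	"fmt"
	"net"
)

// classify maps an IP to a named class using the standard
// library's predicates. More specific checks come first
// because IsGlobalUnicast also returns true for private
// addresses.
func classify(ip net.IP) string {
	switch {
	case ip.IsLoopback():
		return "loopback"
	case ip.IsLinkLocalUnicast():
		return "link_local_unicast"
	case ip.IsLinkLocalMulticast():
		return "link_local_multicast"
	case ip.IsInterfaceLocalMulticast():
		return "interface_local_multicast"
	case ip.IsMulticast():
		return "multicast"
	case ip.IsPrivate(): // Go 1.17+; RFC 1918 and RFC 4193 ranges
		return "private"
	case ip.IsGlobalUnicast():
		return "global_unicast"
	case ip.IsUnspecified():
		return "unspecified"
	default:
		return "other"
	}
}

func main() {
	fmt.Println(classify(net.ParseIP("10.0.0.5"))) // private
	fmt.Println(classify(net.ParseIP("8.8.8.8")))  // global_unicast
	fmt.Println(classify(net.ParseIP("fe80::1")))  // link_local_unicast
}
```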
Ah great, looks like Go libs are helping you classify them. So what I called public would be "unicast" and "global_unicast" in there, right? I wonder if, for tagging the events, we shouldn't map all of these back to something simpler. Here's an example:
We'd lose a bit of detail, but I think for the purposes of operational and security monitoring, the distinction between all of the local types of communication is not that interesting.
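As an illustration, that collapse could be expressed as a plain lookup table; the groupings below are one plausible reading of the suggestion, not the mapping from the comment above:

```go
// coarse collapses the detailed classes from the Go-based
// classifier into simpler tags for event tagging. The exact
// groupings are an illustrative assumption.
var coarse = map[string]string{
	"global_unicast":            "public",
	"unicast":                   "public",
	"private":                   "private",
	"loopback":                  "local",
	"link_local_unicast":        "local",
	"unspecified":               "local",
	"multicast":                 "multicast",
	"link_local_multicast":      "multicast",
	"interface_local_multicast": "multicast",
}
```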
Package dns implements a processor that can perform DNS lookups by sending a DNS request over UDP to a recursive nameserver. Each instance of the processor is independent (no shared cache), so it's best to define only one instance of the processor. It caches DNS results in memory and honors the record's TTL. It also caches failures for the configured failure TTL. This filter, like all filters, only processes one event at a time, so the use of this plugin can significantly slow down your pipeline's throughput if you have a high-latency network. By way of example, if each DNS lookup takes 2 milliseconds, the maximum throughput you can achieve with a single filter worker is 500 events per second (1000 milliseconds / 2 milliseconds).

Simple config example:

```
processors:
- dns:
  - type: reverse
    action: append
    fields:
      ip: server.hostname
      client_ip: client.hostname
```

Full config example:

```
processors:
- dns:
  - type: reverse
    action: append
    fields:
      ip: hostname
      client_ip: client_hostname
    timeout: 500ms
    success_cache:
      capacity.initial: 1000
      capacity.max: 10000
    failure_cache:
      capacity.initial: 1000
      capacity.max: 10000
      ttl: 1m
    nameservers: ['1.1.1.1', '8.8.8.8']
```

Closes #7770

This also updates golang/x/net to let us build correctly on netbsd/arm, which was failing with some of the new includes this requires.
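A minimal sketch of the success/failure caching the description mentions, assuming a simple map with per-entry expiry (an illustration of the idea, not the processor's actual cache):

```go
package dnscache

import (
	"sync"
	"time"
)

// entry holds either a resolved hostname or a lookup error,
// plus the time at which the entry stops being valid.
type entry struct {
	hostname string
	err      error
	expires  time.Time
}

// ttlCache expires successes at the DNS record's own TTL and
// failures at a single configured failure TTL.
type ttlCache struct {
	mu         sync.Mutex
	data       map[string]entry
	failureTTL time.Duration
}

func newTTLCache(failureTTL time.Duration) *ttlCache {
	return &ttlCache{data: make(map[string]entry), failureTTL: failureTTL}
}

// Get returns the cached result for ip, if present and unexpired.
// Expired entries are evicted lazily on access.
func (c *ttlCache) Get(ip string) (entry, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.data[ip]
	if !ok || time.Now().After(e.expires) {
		delete(c.data, ip)
		return entry{}, false
	}
	return e, true
}

// PutSuccess caches a resolved hostname for the record's TTL.
func (c *ttlCache) PutSuccess(ip, hostname string, recordTTL time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[ip] = entry{hostname: hostname, expires: time.Now().Add(recordTTL)}
}

// PutFailure caches a lookup error for the configured failure TTL.
func (c *ttlCache) PutFailure(ip string, err error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[ip] = entry{err: err, expires: time.Now().Add(c.failureTTL)}
}
```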
As a user, I would like to be able to resolve IP addresses in events to hostnames by using reverse DNS lookups (querying PTR records). My hosts send data directly to Elastic Cloud. Many (but not all) of the IP addresses in my events are private IP addresses that are only resolvable via internal DNS servers (so the enrichment cannot be done in Elastic Cloud).
Ideally, I'd like it to be possible to only enrich events associated with my private network. A condition that works on IP ranges would be nice, but I could also use a regex (less ideal) against an IP prefix.
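A sketch of what that conditional enrichment could look like; the exact config shape and the `when.network` condition syntax here are assumptions modeled on the experimental work above, not a confirmed final API:

```
processors:
- dns:
    type: reverse
    action: append
    fields:
      client_ip: client_hostname
    when.network.client_ip: private
```

With a condition like this, only events whose `client_ip` falls in a private range would be enriched, so public addresses could still be handled centrally (GeoIP, central reverse lookups) as described earlier in the thread.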