
Logstash Unhealthy #522

Closed
devilman85 opened this issue Dec 5, 2024 · 11 comments
Labels
invalid (This doesn't seem right) · logstash (Relating to Malcolm's use of Logstash)

Comments

@devilman85

Although I have the logs in my Elasticsearch cluster and they are visible in Kibana, the Logstash container shows as unhealthy. I'm sharing the container log file so we can figure out together what, if anything, I need to change. Help me please.
malcolm-logstash-logs.txt

@devilman85 added the bug (Something isn't working) label Dec 5, 2024
@mmguero added this to Malcolm Dec 5, 2024
@mmguero
Collaborator

mmguero commented Dec 5, 2024

The lines up until 509 in that .txt file just indicate that the Logstash pipeline is starting up and is not available yet. Once you hit this line:

[2024-12-05T11:59:20,899][INFO ][logstash.agent           ] Pipelines running {:count=>6, :running_pipelines=>[:"malcolm-input", :"malcolm-output", :"malcolm-suricata", :"malcolm-beats", :"malcolm-enrichment", :"malcolm-zeek"], :non_running_pipelines=>[]}

Everything is up, and Logstash should be healthy and ready to go after that point.
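
If you want to see what the container health check itself is reporting, here's a quick sketch (this assumes a Docker Compose deployment and that the container name contains "logstash"; adjust the filter to your setup):

    # Show the status (including health) Docker reports for the Logstash container
    docker ps --filter "name=logstash" --format "{{.Names}}: {{.Status}}"

    # Or dump the most recent health-check results in full
    docker inspect --format '{{json .State.Health}}' $(docker ps -q --filter "name=logstash")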

@mmguero closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 5, 2024
@github-project-automation bot moved this to Done in Malcolm Dec 5, 2024
@mmguero added the invalid (This doesn't seem right) label and removed the bug (Something isn't working) label Dec 5, 2024
@mmguero moved this from Done to Invalid in Malcolm Dec 5, 2024
@mmguero added the logstash (Relating to Malcolm's use of Logstash) label Dec 5, 2024
@devilman85
Author

It stays like this for hours and does not change status.

@mmguero
Collaborator

mmguero commented Dec 5, 2024

Stays like what? After you see that Pipelines running log at 2024-12-05T11:59:20,899, there's nothing else you would expect to see in the logs. As long as the logs are reaching your Elasticsearch cluster and are visible in Kibana, there's nothing else you need to worry about with that output.

@devilman85
Author

I mean that there hasn't been a status change since that log I shared. If you're telling me I don't have to worry too much, since the data is reaching Kibana, I trust your advice.

One last question: do you recommend that I also install Hedgehog Linux to have a better capture system? Right now I just have Malcolm doing everything. Thank you for your attention.

@mmguero
Collaborator

mmguero commented Dec 5, 2024

That totally just depends on your needs. If you only need to capture from a single location/on a single interface, and the Malcolm machine seems to be handling the load okay, then you can stick with a single instance like you're doing. The main reason for a Hedgehog Linux sensor is to spread out your capture points or distribute the capture load across different systems.

@devilman85
Author

I currently capture data from a SPAN port on the switch, which sends traffic directly to the server hosting Malcolm on ESXi, with a port dedicated to traffic capture. A network TAP should arrive shortly.
My network currently has about 180 hosts.

Do you recommend implementing Hedgehog Linux in this situation?

@mmguero
Collaborator

mmguero commented Dec 5, 2024

It's hard to judge solely based on the number of hosts; I think it's more related to these two things:

  • sensor placement: based on your comment, where you're capturing on an ESXi port, it seems like you don't particularly need multiple capture points as long as you're seeing data from all the hosts that you should be
  • load: as long as your Malcolm instance seems to be keeping up okay, you're probably fine with what you're doing today

One thing you could do is, in the zeek-live settings, set the ZEEK_DISABLE_STATS variable to false so that Zeek generates capture statistics while it's capturing, and set the two statistics variables in the suricata-live settings to true so that Suricata generates capture statistics as well. Once you've done that and restarted Malcolm, you'll be getting capture statistics. Then you can open the Packet Capture Statistics dashboard (at the very bottom of the navigation pane for your dashboards in Kibana) to see whether you're dropping packets and get an idea of how much traffic Malcolm is seeing.
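
A rough sketch of those edits, assuming a Docker-based Malcolm install where these settings live in ./config/zeek-live.env and ./config/suricata-live.env, and assuming the Suricata variable names and the ./scripts/restart helper (none of those names are spelled out above, so adjust them to your install):

    # Enable Zeek capture statistics (zeek-live settings)
    sed -i 's/^ZEEK_DISABLE_STATS=.*/ZEEK_DISABLE_STATS=false/' ./config/zeek-live.env

    # Enable Suricata capture statistics (suricata-live settings); variable names assumed
    sed -i 's/^SURICATA_STATS_ENABLED=.*/SURICATA_STATS_ENABLED=true/' ./config/suricata-live.env
    sed -i 's/^SURICATA_STATS_EVE_ENABLED=.*/SURICATA_STATS_EVE_ENABLED=true/' ./config/suricata-live.env

    # Restart Malcolm so the new settings take effect
    ./scripts/restart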

@devilman85
Author

Thank you very much for this information. I will check tomorrow and update you

@devilman85
Author

I activated the variables, but since the malcolm-beats index is not populated I get an error in the visualization:

https://192.168.1.11:5601/c8b46e87c4d6/bundles/plugin/visTypeTimeseries/1.0.0/visTypeTimeseries.chunk.8.js:1:11029
219/D/se<@https://192.168.1.11:5601/c8b46e87c4d6/bundles/plugin/visTypeTimeseries/1.0.0/visTypeTimeseries.chunk.14.js:1:5556

If I read correctly, the dashboard is tied to that index and to the capture_loss.log logs. Correct me if I'm wrong.
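
(To double-check that the index really is empty — assuming Elasticsearch is reachable on port 9200 of this host and that the index pattern is something like malcolm_beats_*, both of which are guesses on my part — something like this should show its document count:)

    # Hypothetical host, credentials, and index pattern; adjust to your cluster
    curl -k -u elastic "https://192.168.1.11:9200/_cat/indices/malcolm_beats*?v"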

@mmguero
Collaborator

mmguero commented Dec 6, 2024

My guess would be that it's a version compatibility issue between that dashboard and your Kibana. What version of Kibana do you have?

But yeah, for Zeek the relevant logs are capture_loss.log and stats.log:

  • capture_loss.log
        [
          "zeek.capture_loss.ts_delta",
          "host.name",
          "zeek.capture_loss.peer",
          "zeek.capture_loss.acks",
          "zeek.capture_loss.gaps",
          "zeek.capture_loss.percent_lost"
        ]
  • stats.log
        [
          "host.name",
          "zeek.stats.peer",
          "zeek.stats.mem",
          "zeek.stats.pkts_link",
          "zeek.stats.pkts_proc",
          "zeek.stats.pkts_dropped",
          "zeek.stats.bytes_recv",
          "zeek.stats.tcp_conns",
          "zeek.stats.udp_conns",
          "zeek.stats.icmp_conns",
          "zeek.stats.files"
        ]
  • suricata
        [
          "host.name",
          "suricata.stats.capture.kernel_packets",
          "suricata.stats.pkts_dropped",
          "suricata.stats.capture.errors",
          "suricata.stats.decoder.bytes",
          "suricata.stats.decoder.ethernet",
          "suricata.stats.decoder.ipv4",
          "suricata.stats.decoder.ipv6",
          "suricata.stats.detect.engines.rules_loaded",
          "suricata.stats.detect.alert"
        ]

So you could just look at those fields yourself in Discover or wherever. See the Zeek and Suricata documentation for what they mean.
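
For example, a rough way to eyeball recent drop counts from the command line (the index pattern, host, credentials, and the @timestamp field are all assumptions on my part; the field names come from the lists above):

    # Hypothetical index pattern, host, and credentials; adjust to your cluster
    curl -k -u elastic "https://192.168.1.11:9200/malcolm_beats*/_search?pretty" \
      -H 'Content-Type: application/json' -d '{
        "size": 5,
        "sort": [{ "@timestamp": "desc" }],
        "_source": ["host.name", "zeek.stats.pkts_dropped", "zeek.capture_loss.percent_lost", "suricata.stats.pkts_dropped"],
        "query": { "bool": { "should": [
          { "exists": { "field": "zeek.stats.pkts_dropped" } },
          { "exists": { "field": "suricata.stats.pkts_dropped" } }
        ] } }
      }'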

@devilman85
Author

Kibana: 8.16.1
Elasticsearch nodes: 8.16.1

Thanks for this hint.
