For the simplest Elasticsearch & Kibana stack monitoring setup from a kibana clone, using internal collection, first start elasticsearch with monitoring and a local exporter enabled.
yarn es snapshot --license trial \
-E xpack.monitoring.collection.enabled=true \
-E xpack.monitoring.exporters.id0.type=local
Then start kibana:
yarn start
Open kibana and navigate to "Stack Monitoring" (via the sidebar, homepage, or search bar). You should land on the cluster overview page with panels for Elasticsearch and Kibana.
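If the page is empty, you can confirm that internal collection is actually writing documents by counting them directly. This is just a quick sketch; it assumes the default elastic/changeme credentials used by the dev elasticsearch snapshot.
# a non-zero count means the local exporter is writing monitoring documents
curl -s -u elastic:changeme "http://localhost:9200/.monitoring-es-*/_count?pretty"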
This is the simplest way to get some data to explore, but internal collection is deprecated, so next we'll set up metricbeat collection.
To set up stack monitoring with metricbeat collection, first start elasticsearch with a trial license.
yarn es snapshot --license trial
Next, we'll need to give kibana a fixed base path so metricbeat can query it. Add this to your kibana.dev.yml file:
server.basePath: '/ftw'
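This fixed base path is what metricbeat uses when it queries kibana. The real wiring lives in the metricbeat.yarn.yml reference config mounted into the container below; as a rough sketch only (the hosts, period, and metricsets here are assumptions rather than copies of that file), the kibana module section looks something like:
metricbeat.modules:
  # collect kibana stack monitoring metrics over the fixed /ftw base path
  - module: kibana
    metricsets: ["stats"]
    period: 10s
    hosts: ["http://host.docker.internal:5601"]
    basepath: "/ftw"
    xpack.enabled: true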
Then start kibana:
yarn start
Next start metricbeat. Any method of installing metricbeat works fine. We'll use docker since it is a good common point regardless of your development OS.
docker run --name metricbeat \
--pull always --rm \
--hostname=metricbeat \
--publish=5066:5066 \
--volume="$(pwd)/x-pack/plugins/monitoring/dev_docs/reference/metricbeat.yarn.yml:/usr/share/metricbeat/metricbeat.yml:ro" \
docker.elastic.co/beats/metricbeat:master-SNAPSHOT
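Port 5066 is published so you can hit metricbeat's HTTP endpoint to confirm the container came up and loaded its config (this assumes the reference config enables http.enabled on that port, which is presumably why it is exposed):
# should return the beat name, uuid and version as JSON
curl -s "http://localhost:5066/?pretty"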
Regardless of the metrics collection method, logs will get collected using filebeat.
Similar to metricbeat, any method of installing filebeat works fine. We'll use docker again here as a good common point.
docker run --name filebeat \
--pull always --rm \
--hostname=filebeat \
--publish=5067:5067 \
--volume="$(pwd)/.es:/es:ro" \
--volume="$(pwd)/x-pack/plugins/monitoring/dev_docs/reference/filebeat.yarn.yml:/usr/share/filebeat/filebeat.yml:ro" \
docker.elastic.co/beats/filebeat:master-SNAPSHOT
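To sanity-check filebeat, note that the .es mount is where the dev elasticsearch snapshot writes its logs; filebeat tails those and ships them to elasticsearch. A quick count, again assuming elastic/changeme credentials and the default filebeat-* index naming:
# a non-zero count means log documents are flowing
curl -s -u elastic:changeme "http://localhost:9200/filebeat-*/_count?pretty"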
The "Standalone Cluster" entry appears in Stack Monitoring when there are monitoring documents that lack a cluster_uuid
. Beats will send these in some timing/failure cases, but the easiest way to generate them intentionally to start a logstash node with monitoring enabled and no elasticsearch output.
For example using docker and metricbeat collection:
docker run --name logstash \
--pull always --rm \
--hostname=logstash \
--publish=9600:9600 \
--volume="$(pwd)/x-pack/plugins/monitoring/dev_docs/reference/logstash.yml:/usr/share/logstash/config/logstash.yml:ro" \
--volume="$(pwd)/x-pack/plugins/monitoring/dev_docs/reference/pipelines.yml:/usr/share/logstash/config/pipelines.yml:ro" \
docker.elastic.co/logstash/logstash:master-SNAPSHOT
Note that you can add the following arguments to the logstash docker run command above to populate cgroup/cfs data as well. This will require a cgroup v1 docker host until logstash#14534 is resolved:
--cpu-period=100000 \
--cpu-quota=150000 \
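Once logstash is up, you can check that its monitoring documents are arriving; since no elasticsearch output is configured, they should surface under the "Standalone Cluster" entry in the UI. A sketch, again assuming elastic/changeme credentials and metricbeat-collected .monitoring-logstash-* indices:
# a non-zero count means logstash metrics are being collected
curl -s -u elastic:changeme "http://localhost:9200/.monitoring-logstash-*/_count?pretty"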
We also maintain an internal docker-compose setup for running a full stack with monitoring enabled for all components.
See (internal) https://github.com/elastic/observability-dev/tree/main/tools/docker-testing-cluster for more details.
For some types of changes (for example, new fields, templates, endpoints or data processing logic), you may want to run stack components from source.
See Running Components from Source for details on how to do this for each component.