This repository describes the deployment of the ELK Stack (Elasticsearch, Logstash, and Kibana) on an Amazon EKS (Elastic Kubernetes Service) cluster using Terraform and Kubernetes manifests.
Note: Another way to deploy ELK is to use AWS Opensearch Service.
Before deploying the ELK stack, ensure that you have the following installed and configured:

- AWS CLI: Install the AWS CLI and configure it with the appropriate AWS credentials.
- kubectl: Install kubectl for managing Kubernetes resources.
- Helm: Install Helm to deploy Kubernetes applications.
- Terraform: Install Terraform for provisioning AWS infrastructure.
- eksctl: Install eksctl for creating and managing EKS clusters.
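To confirm the tools are on your PATH, each CLI reports its version:

```bash
aws --version
kubectl version --client
helm version
terraform version
eksctl version
```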
- VPC Module: Provisions a custom VPC, subnets, and associated networking components.
- EKS Module: Provisions an EKS cluster with managed node groups, IAM roles, and necessary permissions.
- Elasticsearch: Single-node or multi-node Elasticsearch cluster, persistent storage, and configuration.
- Logstash: Configured to collect and ship logs to Elasticsearch.
- Kibana: Accessible via an external IP (e.g., through a LoadBalancer service, sketched below).
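A minimal sketch of such a LoadBalancer Service for Kibana (the name and namespace are assumptions; the actual manifests live in elk-k8s/kibana/):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana          # assumed name; see elk-k8s/kibana/ for the real manifest
  namespace: bi-elk
spec:
  type: LoadBalancer    # provisions an AWS load balancer with an external address
  selector:
    app: kibana
  ports:
    - port: 5601        # Kibana's default port
      targetPort: 5601
```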
The first step is to create the EKS cluster using the provided Terraform scripts.
Create the following resources if they do not already exist:
- VPC (Virtual Private Cloud) and Subnets
- Security group
- IAM roles for EKS control plane operations
- KMS keys for volume data encryption
- EKS Cluster
```bash
cd terraform
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars"
```
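A hypothetical terraform.tfvars for reference; the variable names below are assumptions, so match them against the variables declared in the terraform/ directory:

```hcl
# Hypothetical variable names; values taken from the cluster details below
region             = "<your-region>"
cluster_name       = "brics-bi-k8s"
kubernetes_version = "1.30"
vpc_cidr           = "10.0.0.0/16"   # assumed CIDR
```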
Purpose | KMS Key | Used by |
---|---|---|
S3 Access | brics-bi-s3-key | |
EBS Encryption | brics-bi-k8s-ebs-key | attached volumes |
ETCD Encryption | brics-bi-etcd-key | brics-bi-k8s |
Purpose | IAM Role | Used by |
---|---|---|
EKS Cluster | brics-bi-cluster-role | brics-bi-k8s |
Node Group IAM Role | brics_bi_node_group_role | brics-bi-workers |
EKS EBS CSI Driver | bricsbiEksEbsCsiDriverRole | attached volumes |
Purpose | Name | Used by |
---|---|---|
VPC | inet-accessible | brics-bi-k8s |
Security Group | frontdoor-default-sg | brics-bi-k8s |
Attribute | Details |
---|---|
Cluster Name | brics-bi-k8s |
Kubernetes Version | 1.30 |
VPC Module Version | 5.13.0 |
EKS Module Version | 20.24.0 |
Security Group | bi-k8s-security-group |
Access Method | Via Bastion Server and Direct Access |
Namespace for ELK | bi-elk |
StorageClass | bi-k8s-gp2 (Provisioner: kubernetes.io/aws-ebs) |
VolumeBindingMode | WaitForFirstConsumer |
Add-ons | EBS CSI Driver |
ELK Setup | Elasticsearch, Kibana, Logstash |
ELK Access | Kibana (Internet Access), Elasticsearch (Internal) |
Current Status | Cluster Accessible, ELK Deployed |
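The bi-k8s-gp2 StorageClass from the table above might look like the following sketch (parameters beyond the provisioner and binding mode are assumptions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: bi-k8s-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2                               # assumed EBS volume type, implied by the name
volumeBindingMode: WaitForFirstConsumer   # provision only once a pod is scheduled
```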
Update your kubeconfig to access the cluster:

```bash
aws eks --region <region> update-kubeconfig --name <cluster-name>
```
After the cluster is ready, deploy the ELK stack using Kubernetes manifests, in the suggested order: Kibana, Elasticsearch, and Logstash. Apply the files in the following directories:
```bash
kubectl apply -f elk-k8s/kibana/
kubectl apply -f elk-k8s/elasticsearch/
kubectl apply -f elk-k8s/logstash/
```
Kibana can be accessed through the LoadBalancer's external IP. Retrieve it with the following command:

```bash
kubectl get svc -n <namespace>
```
For the stack to work, the components must be configured so that:

- Logstash can send data to Elasticsearch
- Elasticsearch can build indices
- Kibana can access indexed data from Elasticsearch
Configuration options (e.g., elasticsearch.yml) can be customized in the elasticsearch/ directory. Prepare elasticsearch.yml, make sure the deployment references it, then redeploy.
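A minimal single-node elasticsearch.yml sketch (these are common settings for in-cluster use, not necessarily what this repository ships):

```yaml
cluster.name: bi-elk-cluster     # assumed cluster name
network.host: 0.0.0.0            # listen on all interfaces inside the pod
discovery.type: single-node      # skip master election for a one-node cluster
xpack.security.enabled: false    # assumed for simplicity; enable in production
```

After updating the file, reapply the manifests: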
```bash
kubectl apply -f elk-k8s/elasticsearch/
```
Because Kibana is exposed externally via a LoadBalancer, you will need to restrict access or enable additional security settings (e.g., HTTPS).
The Logstash configuration (logstash.conf) is essential for collecting data, and it is the central piece when diagnosing the ELK stack.
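A sketch of a logstash.conf that reads the tickets file mounted later in this guide and ships it to Elasticsearch (the index name and the Elasticsearch host are assumptions):

```conf
input {
  file {
    path => "/usr/share/logstash/tickets/tickets.json"  # path used below
    start_position => "beginning"
    sincedb_path => "/dev/null"    # re-read the file on every restart
    codec => "json"                # parse each line as a JSON event
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]  # assumed in-cluster service name
    index => "zammad-tickets"               # hypothetical index name
  }
}
```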
To apply a customised configuration, apply the ConfigMap and redeploy:
```bash
kubectl apply -f elk-k8s/logstash/logstash-configmap.yaml
kubectl apply -f elk-k8s/logstash/logstash-deployment.yaml
```
To easily transfer data from the bastion host to the pod volume, use a pvc-busybox pod (sketched below).
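A minimal pvc-busybox.yaml sketch (the PVC name logstash-pvc is an assumption; point it at whichever claim backs the Logstash data volume):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-busybox
  namespace: bi-elk
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]       # keep the pod alive for kubectl cp
      volumeMounts:
        - name: logstash-data
          mountPath: /mnt/logstash     # target path for kubectl cp below
  volumes:
    - name: logstash-data
      persistentVolumeClaim:
        claimName: logstash-pvc        # assumed PVC name
```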
```bash
kubectl apply -f pvc-busybox.yaml
kubectl cp tickets.json pvc-busybox:/mnt/logstash/tickets.json -n bi-elk
```
After the transfer, the data should be available to the Logstash pod at /usr/share/logstash/tickets/.
- Kibana/Elasticsearch connectivity: Ensure that Kibana and Elasticsearch are deployed in the same namespace and that there are no firewall or network issues between them.
- Persistent Volumes: Ensure that the EBS volumes are correctly bound to Elasticsearch and Logstash.
- After the data is transferred, watch the Logstash log (see the command below) to make sure it parses the file properly.
- If Logstash pushes data correctly to Elasticsearch, the index should be built, but it can take a few minutes.
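To follow the Logstash log (assuming the deployment is named logstash in the bi-elk namespace):

```bash
kubectl logs -f deployment/logstash -n bi-elk
```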
- Delete PVCs and PVs
- Delete custom resources, namespaces and applications
- Remove managed node groups and add-ons
- Clean up networking components and any LoadBalancers
- Delete the EKS Cluster
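The teardown steps above might look like the following (a hedged sketch; adjust names to your deployment):

```bash
# Remove the ELK workloads, PVCs, and the namespace
kubectl delete -f elk-k8s/logstash/ -f elk-k8s/elasticsearch/ -f elk-k8s/kibana/
kubectl delete pvc --all -n bi-elk
kubectl delete namespace bi-elk

# Destroy the cluster, node groups, and networking provisioned by Terraform
cd terraform
terraform destroy -var-file="terraform.tfvars"
```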
- Try AWS OpenSearch Service
- Deploy Solution Assessment
- Security Control Management
- Connecting to Zammad API
- Connecting to Waldur API
- Dashboard Design
- Explore Other Data Sources
This section describes how to use AWS OpenSearch Service (formerly known as Amazon Elasticsearch Service) for an ELK (Elasticsearch, Logstash, and Kibana) setup.
Create an IAM user and its policies for the domain as follows:
Attribute | Details |
---|---|
User | elastic-master-user |
Policies | AmazonESFullAccess, OpensearchAccess |
Console access | Disabled |
Access Key 1 | Created |
Follow this link to create the OpenSearch Service domain.
Set up the following policy for the IAM user:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPost",
        "es:ESHttpPut",
        "es:ESHttpDelete"
      ],
      "Resource": "arn:aws:es:<Your-Region>:<Your-Account>:domain/<Your-Domain>/*"
    }
  ]
}
```
For IAM-based access to OpenSearch, we use IAM policies and AWS Signature Version 4 signing to authenticate requests made to the OpenSearch dashboard.
- Install the AWS Signer Browser extension (available for Chrome and Firefox).
- Open the extension settings and provide your AWS Access Key and Secret Key for the IAM user you are using to access OpenSearch.
- After configuring the credentials in the extension, try accessing the OpenSearch dashboard again. The extension will sign your requests automatically using AWS SigV4.
Attribute | Details |
---|---|
Key | Your-IAM-User-Key |
Secret | Your-IAM-User-Secret |
Host Patterns | https://your-Opensearch-domain.aws/* |
Defined Services | See below |
Set the defined services:

```json
[
  {
    "region": "your-region",
    "service": "es",
    "host": "*"
  }
]
```
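Requests can also be signed programmatically. A minimal Python sketch using requests-aws4auth (an assumption — the repository's opensearch.py may use a different client):

```python
import os

import requests
from requests_aws4auth import AWS4Auth

# Credentials for the IAM user, read from the environment
auth = AWS4Auth(
    os.environ["AWS_ACCESS_KEY_ID"],
    os.environ["AWS_SECRET_ACCESS_KEY"],
    "<your-region>",
    "es",  # SigV4 service name for OpenSearch/Elasticsearch domains
)

# List indices on the domain with a SigV4-signed request
resp = requests.get("https://<your-opensearch-domain>/_cat/indices?v", auth=auth)
print(resp.text)
```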
```bash
cd scripts
cp example.env .env
```
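A hypothetical .env; the variable names here are assumptions, so check example.env for the ones the scripts actually read:

```bash
# Hypothetical variable names; see example.env for the real ones
AWS_ACCESS_KEY_ID=<Your-IAM-User-Key>
AWS_SECRET_ACCESS_KEY=<Your-IAM-User-Secret>
OPENSEARCH_HOST=https://<your-opensearch-domain>
ZAMMAD_URL=https://<your-zammad-host>
ZAMMAD_TOKEN=<your-zammad-api-token>
```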
Set the credential variables in the .env file, and then set up a Python environment to run the scripts:
```bash
cd scripts
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
python3 opensearch.py create-index --index zammad-dev-tickets
```
Then check that the index was created, either from the command line or in the AWS OpenSearch console:

```bash
curl https://<your-opensearch-domain>/_cat/indices?v
```
```bash
cd scripts
source env/bin/activate
python3 zammad.py get-tickets --output ../data/zammad-dev.json
```
```bash
cd scripts
source env/bin/activate
python3 zammad.py get-tickets --output ../data/zammad-dev.json
python3 opensearch.py upload-file --index zammad-dev-tickets --file ../data/zammad-dev.json
```
```bash
cd scripts
source env/bin/activate
python3 opensearch.py csv-to-json --csv my-file.csv --json my-file.json
```
Then use the upload-file command to upload the JSON file to the index.
To verify, search the index:

```bash
curl https://<your-opensearch-domain>/zammad-dev-tickets/_search
```
Sometimes an upload creates duplicate documents; you can clear the index before uploading a new file:

```bash
python3 opensearch.py clear-index --index zammad-dev-tickets
```