Kibana Helm Chart

This functionality is in alpha status and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but alpha features are not subject to the support SLA of official GA features.

This Helm chart is a lightweight way to configure and run our official Kibana Docker image.

Requirements

  • Kubernetes >= 1.8
  • Helm >= 2.8.0

Installing

  • Add the Elastic Helm charts repo:
    helm repo add elastic https://helm.elastic.co

  • Install it:
    helm install --name kibana elastic/kibana --version 7.0.0-alpha1
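
Any of the chart's configuration values (listed under Configuration below) can also be overridden at install time. As a sketch, the replicas override here is purely illustrative:

```shell
# Override a single chart value on the command line (illustrative override)
helm install --name kibana elastic/kibana --version 7.0.0-alpha1 --set replicas=2
```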
    

Compatibility

This chart is tested with the latest supported versions. The currently tested versions are:

5.x     6.x    7.x
5.6.16  6.7.1  7.0.0

Examples of installing older major versions can be found in the examples directory.

While only the latest releases are tested, it is possible to install older or newer releases by overriding the imageTag. For example, to install version 7.0.0 of Kibana:

helm install --name kibana elastic/kibana --set imageTag=7.0.0

Configuration

elasticsearchHosts
  The URLs used to connect to Elasticsearch.
  Default: http://elasticsearch-master:9200

elasticsearchURL
  The URL used to connect to Elasticsearch. Deprecated; only needed for Kibana versions < 6.6.
  Default: (none)

replicas
  Kubernetes replica count for the deployment (i.e. how many pods).
  Default: 1

extraEnvs
  Extra environment variables which will be appended to the env: definition for the container.
  Default: {}

secretMounts
  Allows you to easily mount a secret as a file inside the deployment. Useful for mounting certificates and other secrets. See values.yaml for an example.
  Default: {}

image
  The Kibana Docker image.
  Default: docker.elastic.co/kibana/kibana

imageTag
  The Kibana Docker image tag.
  Default: 7.0.0

imagePullPolicy
  The Kubernetes imagePullPolicy value.
  Default: IfNotPresent

resources
  Allows you to set the resources for the deployment.
  Default: requests.cpu: 100m, requests.memory: 2Gi, limits.cpu: 1000m, limits.memory: 2Gi

protocol
  The protocol that will be used for the readinessProbe. Change this to https if you have server.ssl.enabled: true set.
  Default: http

kibanaConfig
  Allows you to add any config files in /usr/share/kibana/config/ such as kibana.yml. See values.yaml for an example of the formatting.
  Default: {}

podSecurityContext
  Allows you to set the securityContext for the pod.
  Default: {}

serviceAccount
  Allows you to overwrite the "default" serviceAccount for the pod.
  Default: []

antiAffinityTopologyKey
  The anti-affinity topology key. By default this will prevent multiple Kibana instances from running on the same Kubernetes node.
  Default: kubernetes.io/hostname

antiAffinity
  Setting this to hard enforces the anti-affinity rules. If it is set to soft, they will be applied "best effort".
  Default: hard

httpPort
  The http port that Kubernetes will use for the healthchecks and the service.
  Default: 5601

maxUnavailable
  The maxUnavailable value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod.
  Default: 1

updateStrategy
  Allows you to change the default update strategy for the deployment. A standard upgrade of Kibana requires a full stop and start, which is why the default strategy is set to Recreate.
  Default: Recreate

readinessProbe
  Configuration for the readinessProbe.
  Default: failureThreshold: 3, initialDelaySeconds: 10, periodSeconds: 10, successThreshold: 3, timeoutSeconds: 5

imagePullSecrets
  Configuration for imagePullSecrets so that you can use a private registry for your image.
  Default: []

nodeSelector
  Configurable nodeSelector so that you can target specific nodes for your Kibana instances.
  Default: {}

tolerations
  Configurable tolerations.
  Default: []

ingress
  Configurable ingress to expose the Kibana service. See values.yaml for an example.
  Default: enabled: false
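
To illustrate the formatting of a few of these parameters, a minimal custom values file might look like the following sketch. The filename my-values.yaml and the specific settings shown are illustrative, not recommendations:

```yaml
# my-values.yaml -- illustrative overrides for the Kibana chart
elasticsearchHosts: "http://elasticsearch-master:9200"

# extraEnvs entries are appended to the container's env: definition
extraEnvs:
  - name: SERVER_NAME          # example variable only
    value: "kibana-example"

# kibanaConfig entries become files under /usr/share/kibana/config/
kibanaConfig:
  kibana.yml: |
    server.name: kibana-example

resources:
  requests:
    cpu: 100m
    memory: 2Gi
  limits:
    cpu: 1000m
    memory: 2Gi
```

Such a file would be applied with helm install --name kibana elastic/kibana -f my-values.yaml.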

Examples

In examples/ you will find some example configurations. These examples are used for the automated testing of this Helm chart.

  • Default
  • Security

Testing

This chart uses pytest to test the templating logic. The dependencies for testing can be installed from the requirements.txt in the parent directory:

pip install -r ../requirements.txt
make test

You can also use helm template to look at the YAML being generated:

make template

It is possible to run all of the tests and linting inside a Docker container:

make test