
[elasticsearch] make service configurable #123

Merged: 12 commits, Jun 12, 2019
30 changes: 16 additions & 14 deletions elasticsearch/README.md
@@ -13,6 +13,7 @@ This helm chart is a lightweight way to configure and run our official [Elastics
* 1GB of RAM for the JVM heap

## Usage notes and getting started

* This repo includes a number of [example](./examples) configurations which can be used as a reference. They are also used in the automated testing of this chart
* Automated testing of this chart is currently only run against GKE (Google Kubernetes Engine). If you are using a different Kubernetes provider you will likely need to adjust the `storageClassName` in the `volumeClaimTemplate`
* The default storage class for GKE is `standard` which by default will give you `pd-ssd` type persistent volumes. This is network attached storage and will not perform as well as local storage. If you are using Kubernetes version 1.10 or greater you can use [Local PersistentVolumes](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/local-ssd) for increased performance
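  For the `storageClassName` adjustment mentioned above, a minimal values override might look like the following. This is a sketch only; the storage class name and size are assumptions that depend on your provider:

  ```
  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-storage
    resources:
      requests:
        storage: 30Gi
  ```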
@@ -52,7 +53,6 @@ While only the latest releases are tested, it is possible to easily install old
helm install --name elasticsearch elastic/elasticsearch --version 7.1.1 --set imageTag=7.1.1
```
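
If the `elastic` Helm repository has not been added yet, it first needs to be registered; the URL below is the one Elastic documents for its charts:

```
helm repo add elastic https://helm.elastic.co
```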

## Configuration

| Parameter | Description | Default |
@@ -89,6 +89,8 @@ helm install --name elasticsearch elastic/elasticsearch --version 7.1.1 --set im
| `protocol` | The protocol that will be used for the readinessProbe. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html#_settings) in `extraEnvs` | `9200` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html#_transport_settings) in `extraEnvs` | `9300` |
| `service.type` | The type of the Elasticsearch service; see [Service Types](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types). An example combining both service settings follows this table | `ClusterIP` |
| `service.annotations` | Annotations that Kubernetes will use for the service. These can be used to configure the load balancer when `service.type` is `LoadBalancer`; see [Annotations](https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws) | `{}` |
| `updateStrategy` | The [updateStrategy](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets) for the statefulset. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `maxUnavailable` | The [maxUnavailable](https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget) value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `fsGroup` | The Group ID (GID) for [securityContext.fsGroup](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) so that the Elasticsearch user can read from the persistent volume | `1000` |
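
To illustrate the new `service.type` and `service.annotations` parameters together, a values override could look like this. A sketch assuming AWS; the certificate ARN is a hypothetical placeholder:

```
service:
  type: LoadBalancer
  annotations:
    # Hypothetical ACM certificate; replace with your own ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/example"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
```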
@@ -127,7 +129,7 @@ make

A cluster with X-Pack security enabled

- * Generate SSL certificates following the [official docs]( https://www.elastic.co/guide/en/elasticsearch/reference/6.7/configuring-tls.html#node-certificates)
+ * Generate SSL certificates following the [official docs](https://www.elastic.co/guide/en/elasticsearch/reference/6.7/configuring-tls.html#node-certificates)
* Create Kubernetes secrets for authentication credentials and certificates
```
kubectl create secret generic elastic-credentials --from-literal=password=changeme --from-literal=username=elastic
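# A sketch of creating the matching certificate secret as well; the PKCS#12
# filename is an assumption based on the certificate-generation step above
kubectl create secret generic elastic-certificates --from-file=elastic-certificates.p12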
@@ -139,6 +141,7 @@ A cluster with X-Pack security enabled
make
```
* Attach to one of the containers

```
kubectl exec -ti $(kubectl get pods -l release=helm-es-security -o name | awk -F'/' '{ print $NF }' | head -n 1) bash
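# Once attached, one way to verify the cluster is up; a sketch that assumes the
# elastic-credentials secret created above and TLS enabled on the HTTP layer
curl -u elastic:changeme -k "https://localhost:9200/_cluster/health?pretty"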
```
@@ -178,17 +181,17 @@ There are a couple of reasons we recommend this.
#### How to use the keystore?

1. Create a Kubernetes secret containing the [keystore](https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-settings.html); a sketch for producing the keystore file itself follows this list
```
$ kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore
```
2. Mount it into the container via `secretMounts`
```
secretMounts:
  - name: elasticsearch-keystore
    secretName: elasticsearch-keystore
    path: /usr/share/elasticsearch/config/elasticsearch.keystore
    subPath: elasticsearch.keystore
```
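
The `elasticsearch.keystore` file referenced in step 1 can be produced with the `elasticsearch-keystore` tool that ships with Elasticsearch. A minimal sketch, run from the Elasticsearch home directory; the S3 setting name is only an example:

```
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add s3.client.default.access_key
```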

#### How to enable snapshotting?

@@ -197,7 +200,6 @@
3. Configure the [snapshot repository](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html) as you normally would.
4. To automate snapshots you can use a tool like [curator](https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html). In the future there are plans to have Elasticsearch manage automated snapshots with [Snapshot Lifecycle Management](https://github.com/elastic/elasticsearch/issues/38461).
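
For step 3, registering the repository is a single API call once the secure settings are in place. A minimal sketch, assuming an S3 repository and a hypothetical bucket name; adjust it to whichever repository type you configured:

```
curl -X PUT "localhost:9200/_snapshot/my_backups" -H 'Content-Type: application/json' -d'
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket"
  }
}'
```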

### Local development environments

This chart is designed to run on production-scale Kubernetes clusters with multiple nodes, lots of memory and persistent storage. For that reason it can be a bit tricky to run against local Kubernetes environments such as minikube. Below are some examples of how to get this working locally.
@@ -218,7 +220,6 @@ make

Note that if `helm` or `kubectl` timeouts occur, you may consider creating a minikube VM with more CPU cores or memory allocated.
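
For example, minikube's VM size can be raised at creation time; the values here are assumptions rather than tested recommendations:

```
minikube start --cpus 4 --memory 8192
```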

#### Docker for Mac - Kubernetes

It is also possible to run this chart with the built-in Kubernetes cluster that comes with [docker-for-mac](https://docs.docker.com/docker-for-mac/kubernetes/).
@@ -266,6 +267,7 @@ make test
Integration tests are run using [goss](https://github.com/aelsabbahy/goss/blob/master/docs/manual.md), a serverspec-like tool written in Go. See [goss.yaml](examples/default/test/goss.yaml) for an example of what the tests look like.

To run the goss tests against the default example:

```
cd examples/default
make goss
13 changes: 10 additions & 3 deletions elasticsearch/templates/service.yaml
@@ -3,7 +3,15 @@ kind: Service
apiVersion: v1
metadata:
  name: {{ template "uname" . }}
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "{{ template "uname" . }}"
  annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
  type: {{ .Values.service.type }}
  selector:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
@@ -26,11 +34,10 @@ metadata:
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "{{ template "uname" . }}"
-  annotations:
-    # Create endpoints also if the related pod isn't ready
-    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

> @kimxogus (Contributor, May 3, 2019): This annotation is deprecated, see kubernetes/kubernetes#63742.
spec:
  clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
+  # Create endpoints also if the related pod isn't ready
+  publishNotReadyAddresses: true
  selector:
    app: "{{ template "uname" . }}"
  ports:
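
With the default values, the first template above renders roughly like this. A sketch only: the name comes from the `uname` template (typically `elasticsearch-master`), the labels and selector are abbreviated, and the ports reflect the default `httpPort` and `transportPort`:

```
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-master
  annotations:
    {}
spec:
  type: ClusterIP
  selector:
    app: "elasticsearch-master"  # plus the heritage/release selectors shown above
  ports:
    - name: http
      port: 9200
    - name: transport
      port: 9300
```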
4 changes: 2 additions & 2 deletions elasticsearch/templates/statefulset.yaml
@@ -244,8 +244,8 @@ spec:

cleanup () {
  while true ; do
-    local master="$(http "/_cat/master?h=node")"
-    if [[ $master == "{{ template "uname" . }}"* && $master != "${NODE_NAME}" ]]; then
+    local master="$(http "/_cat/master?h=node" || echo "")"
+    if [[ $master == "{{ template "masterService" . }}"* && $master != "${NODE_NAME}" ]]; then
      echo "This node is not master."
      break
    fi
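
Two things change in this script: the master name is now matched against the `masterService` template instead of `uname`, and the command substitution falls back to an empty string when the `http` helper fails. The second point matters because a transient connection error would otherwise surface as a non-zero exit status; if the hook runs with `set -e` (an assumption here), that would abort the loop instead of retrying. The pattern in isolation:

```
# Sketch: on failure, master becomes "" and the caller can simply retry,
# rather than the failed substitution terminating an errexit shell
master="$(curl -s "localhost:9200/_cat/master?h=node" || echo "")"
```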
3 changes: 3 additions & 0 deletions elasticsearch/tests/elasticsearch_test.py
@@ -152,6 +152,8 @@ def test_defaults():
    # Service
    s = r['service'][uname]
    assert s['metadata']['name'] == uname
    assert s['metadata']['annotations'] == {}
    assert s['spec']['type'] == 'ClusterIP'
    assert len(s['spec']['ports']) == 2
    assert s['spec']['ports'][0] == {
        'name': 'http', 'port': 9200, 'protocol': 'TCP'}
@@ -161,6 +163,7 @@
    # Headless Service
    h = r['service'][uname + '-headless']
    assert h['spec']['clusterIP'] == 'None'
    assert h['spec']['publishNotReadyAddresses'] == True
    assert h['spec']['ports'][0]['name'] == 'http'
    assert h['spec']['ports'][0]['port'] == 9200
    assert h['spec']['ports'][1]['name'] == 'transport'
4 changes: 4 additions & 0 deletions elasticsearch/values.yaml
@@ -117,6 +117,10 @@ protocol: http
httpPort: 9200
transportPort: 9300

service:
  type: ClusterIP
  annotations: {}

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
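
These defaults can be overridden per release without editing the file, using the same `--set` style shown earlier in the README:

```
helm install --name elasticsearch elastic/elasticsearch --set service.type=LoadBalancer
```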