Use persistent storage by default for ES data #1028
Conversation
The new names are defined as consts to provide a slightly better overview.
Adds `-internal` to several things that should be considered internals. Removes some no-longer-used (and mislabeled) "version specific" resources, such as a generic ConfigMap and Secret. Internal mounts now live in directories under `/mnt/elastic-internal`. Internal users have the `elastic-internal-` prefix and roles have the `elastic_internal_` prefix. Since the data and logs directories for ES are not considered internals, they now have the canonical names `elasticsearch-data` and `elasticsearch-logs` (previously just `data` and `logs`).

Example volume mounts:

```
/mnt/elastic-internal/elasticsearch-config-managed from elastic-internal-elasticsearch-config-managed (ro)
/mnt/elastic-internal/keystore-user from elsatic-internal-keystore-user (ro)
/mnt/elastic-internal/probe-user from elastic-internal-probe-user (ro)
/mnt/elastic-internal/process-manager from elastic-internal-process-manager (rw)
/mnt/elastic-internal/secure-settings from elastic-internal-secure-settings (ro)
/mnt/elastic-internal/unicast-hosts from elastic-internal-unicast-hosts (ro)
/usr/share/elasticsearch/bin from elastic-internal-elasticsearch-bin-local (rw)
/usr/share/elasticsearch/config from elastic-internal-elasticsearch-config-local (rw)
/usr/share/elasticsearch/config/http-certs from elastic-internal-http-certificates (ro)
/usr/share/elasticsearch/config/transport-certs from elastic-internal-transport-certificates (ro)
/usr/share/elasticsearch/data from elasticsearch-data (rw)
/usr/share/elasticsearch/logs from elasticsearch-logs (rw)
/usr/share/elasticsearch/plugins from elastic-internal-elasticsearch-plugins-local (rw)
```

Example volumes:

```
elastic-internal-elasticsearch-config-local:
  Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
  Medium:
  SizeLimit:  <unset>
elastic-internal-elasticsearch-plugins-local:
  Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
  Medium:
  SizeLimit:  <unset>
elastic-internal-elasticsearch-bin-local:
  Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
  Medium:
  SizeLimit:  <unset>
elasticsearch-data:
  Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
  Medium:
  SizeLimit:  <unset>
elasticsearch-logs:
  Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
  Medium:
  SizeLimit:  <unset>
elastic-internal-process-manager:
  Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
  Medium:
  SizeLimit:  <unset>
elastic-internal-unicast-hosts:
  Type:      ConfigMap (a volume populated by a ConfigMap)
  Name:      elasticsearch-sample-es-unicast-hosts
  Optional:  false
elastic-internal-probe-user:
  Type:        Secret (a volume populated by a Secret)
  SecretName:  elasticsearch-sample-es-internal-users
  Optional:    false
elsatic-internal-keystore-user:
  Type:        Secret (a volume populated by a Secret)
  SecretName:  elasticsearch-sample-es-internal-users
  Optional:    false
elastic-internal-secure-settings:
  Type:        Secret (a volume populated by a Secret)
  SecretName:  elasticsearch-sample-es-secure-settings
  Optional:    false
elastic-internal-http-certificates:
  Type:        Secret (a volume populated by a Secret)
  SecretName:  elasticsearch-sample-es-http-certs-internal
  Optional:    false
elastic-internal-transport-certificates:
  Type:        Secret (a volume populated by a Secret)
  SecretName:  elasticsearch-sample-es-xdcbcdndf4-certs
  Optional:    false
elastic-internal-elasticsearch-config-managed:
  Type:        Secret (a volume populated by a Secret)
  SecretName:  elasticsearch-sample-es-xdcbcdndf4-config
  Optional:    false
```
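For orientation, here is a minimal sketch of how such name constants might be declared. Apart from `ElasticsearchDataVolumeName`, which appears in the diff further down, the identifiers are illustrative assumptions, not the operator's actual names:

```go
package volume

// Illustrative sketch of the naming convention described above.
// Only ElasticsearchDataVolumeName is confirmed by this PR's diff;
// the other identifiers are assumed names for illustration.
const (
	// All internal volumes are mounted below this root.
	InternalMountRoot = "/mnt/elastic-internal"

	// Internal ES users and roles carry these prefixes.
	InternalUserPrefix = "elastic-internal-" // e.g. elastic-internal-probe-user
	InternalRolePrefix = "elastic_internal_" // ES role names use underscores

	// Data and logs are user-facing, so they get canonical names.
	ElasticsearchDataVolumeName = "elasticsearch-data"
	ElasticsearchLogsVolumeName = "elasticsearch-logs"
)
```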
By default, Elasticsearch nodes get a 1Gi volume, chosen because it maps roughly 1:1 to the default heap size. Users can opt out of this behavior by specifying the data volume in the Elasticsearch resource directly:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: "7.1.0"
  nodes:
  - nodeCount: 1
    podTemplate:
      spec:
        volumes:
        - name: elasticsearch-data
          emptyDir: {}
```

ℹ️ Builds on top of elastic#1024; only the last commit is relevant for this specific PR.

Closes: elastic#913
},
Resources: corev1.ResourceRequirements{
    Requests: corev1.ResourceList{
        corev1.ResourceStorage: resource.MustParse("1Gi"),
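For context, a self-contained sketch of what the full default claim around this snippet might look like. The variable name, claim name, and access mode are assumptions; only the 1Gi storage request comes from the diff:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Sketch only: a plausible shape for the default data volume claim.
// The diff above shows just the storage request; everything else here
// is an assumption for illustration.
var defaultDataVolumeClaim = corev1.PersistentVolumeClaim{
	ObjectMeta: metav1.ObjectMeta{
		Name: "elasticsearch-data", // ElasticsearchDataVolumeName
	},
	Spec: corev1.PersistentVolumeClaimSpec{
		AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("1Gi"),
			},
		},
	},
}

func main() {
	// Print the storage request to confirm the claim is built as expected.
	qty := defaultDataVolumeClaim.Spec.Resources.Requests[corev1.ResourceStorage]
	fmt.Println("default data volume request:", qty.String())
}
```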
I'm wondering if this is big enough as a default. Could we get into a situation where many users start with a simple copy/pasted spec, fill up their 1Gi storage, and have basically lost their data at that point?
I'm seeing Helm charts default to 30Gi, maybe we should align here?
Not sure that would fit Minikube very well though :/
1Gi works well with Minikube etc., and doesn't accidentally incur a large cost. It's probably best if users are conscious of this number in any production scenario.
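For readers following this thread, here is a hedged sketch of sizing the claim explicitly via the `volumeClaimTemplates` section mentioned later in this review, using the 30Gi figure from the Helm chart comparison above. The field layout is assumed from the v1alpha1 samples, not quoted from this PR:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: "7.1.0"
  nodes:
  - nodeCount: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data  # must match the default data volume name
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi  # mirrors the Helm chart default discussed above
```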
@@ -33,6 +33,9 @@ const (
	UnicastHostsFile = "unicast_hosts.txt"

	ProcessManagerEmptyDirMountPath = "/mnt/elastic-internal/process-manager"

	ElasticsearchDataVolumeName = "elasticsearch-data"
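As a usage illustration, a minimal sketch of building the data volume mount from the new constant. The helper name is hypothetical; the mount path matches the example volume mounts in the PR description:

```go
package volume

import corev1 "k8s.io/api/core/v1"

// dataVolumeMount is a hypothetical helper showing how the new constant
// could be used; the mount path matches the example mounts shown above.
func dataVolumeMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      ElasticsearchDataVolumeName, // "elasticsearch-data"
		MountPath: "/usr/share/elasticsearch/data",
	}
}
```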
We should probably update all samples and the quickstart doc to use this name in the VolumeClaimTemplates section.
Already updated that in the "normalization" PR.
Jenkins test this please
LGTM