From cb561a85db28494718070f02bcf080728a48f89b Mon Sep 17 00:00:00 2001
From: Stef Nestor <26751266+stefnestor@users.noreply.github.com>
Date: Tue, 22 Oct 2024 10:30:46 -0600
Subject: [PATCH 1/3] (Doc+) Rolling changes high availability is per ES data
 tier
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

👋 howdy, team! Can we call out that our recommendation for updates applying
to highly available clusters should be multiple nodes per data tier and not
just multiple nodes? (Multiple nodes would potentially keep responsiveness as
long as master-eligible, but this avoids `status:red` search/ingestion
issues.)
---
 .../elasticsearch/orchestration.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
index d146da4c3c..6f65342287 100644
--- a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
+++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
@@ -149,7 +149,7 @@ Due to relying on Kubernetes primitives such as StatefulSets, the ECK orchestrat
 ** Single-node clusters
 ** Clusters containing indices with no replicas

-If an Elasticsearch node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. Clusters with more than one node and at least one replica per index are recommended.
+If an {es} node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. link:{ref}/high-availability.html[Highly available] clusters are recommended, including having more than one node per link:{ref}/data-tiers.html[data tier] and at least one replica per index.

 * Elasticsearch Pods may stay `Pending` during a rolling upgrade if the Kubernetes scheduler cannot re-schedule them back. This is especially important when using local PersistentVolumes. If the Kubernetes node bound to a local PersistentVolume does not have enough capacity to host an upgraded Pod which was temporarily removed, that Pod will stay `Pending`.

From 79c7dbdeec8d3e0bc49cc888f20cbad15d877318 Mon Sep 17 00:00:00 2001
From: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>
Date: Tue, 22 Oct 2024 13:21:25 -0400
Subject: [PATCH 2/3] Update
 docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc

Co-authored-by: Stef Nestor <26751266+stefnestor@users.noreply.github.com>
---
 .../elasticsearch/orchestration.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
index 6f65342287..8aec4e42fa 100644
--- a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
+++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
@@ -149,7 +149,7 @@ Due to relying on Kubernetes primitives such as StatefulSets, the ECK orchestrat
 ** Single-node clusters
 ** Clusters containing indices with no replicas

-If an {es} node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. link:{ref}/high-availability.html[Highly available] clusters are recommended, including having more than one node per link:{ref}/data-tiers.html[data tier] and at least one replica per index.
+If an {es} node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. link:{ref}/high-availability-cluster-design.html[Highly available] clusters are recommended, including having more than one node per link:{ref}/data-tiers.html[data tier] and at least one replica per index.

 * Elasticsearch Pods may stay `Pending` during a rolling upgrade if the Kubernetes scheduler cannot re-schedule them back. This is especially important when using local PersistentVolumes. If the Kubernetes node bound to a local PersistentVolume does not have enough capacity to host an upgraded Pod which was temporarily removed, that Pod will stay `Pending`.

From 39757ad733eec87313a1f6551e77bb4ea48b5a3d Mon Sep 17 00:00:00 2001
From: Stef Nestor <26751266+stefnestor@users.noreply.github.com>
Date: Thu, 24 Oct 2024 07:32:27 -0600
Subject: [PATCH 3/3] feedback

Co-authored-by: Peter Brachwitz
---
 .../elasticsearch/orchestration.asciidoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
index 8aec4e42fa..369becce45 100644
--- a/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
+++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/orchestration.asciidoc
@@ -149,7 +149,7 @@ Due to relying on Kubernetes primitives such as StatefulSets, the ECK orchestrat
 ** Single-node clusters
 ** Clusters containing indices with no replicas

-If an {es} node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. link:{ref}/high-availability-cluster-design.html[Highly available] clusters are recommended, including having more than one node per link:{ref}/data-tiers.html[data tier] and at least one replica per index.
+If an {es} node holds the only copy of a shard, this shard becomes unavailable while the node is upgraded. To ensure link:{ref}/high-availability-cluster-design.html[high availability] it is recommended to configure clusters with three master nodes, more than one node per link:{ref}/data-tiers.html[data tier] and at least one replica per index.

 * Elasticsearch Pods may stay `Pending` during a rolling upgrade if the Kubernetes scheduler cannot re-schedule them back. This is especially important when using local PersistentVolumes. If the Kubernetes node bound to a local PersistentVolume does not have enough capacity to host an upgraded Pod which was temporarily removed, that Pod will stay `Pending`.
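The topology recommended by the final wording of this patch series (three master nodes, more than one node per data tier, at least one replica per index) could be sketched as an ECK `Elasticsearch` manifest along these lines. This is an illustrative example, not part of the patch; the cluster name, version, and tier sizing are assumptions:

```yaml
# Sketch of an HA topology matching the docs recommendation.
# Name "ha-example" and version 8.15.0 are illustrative only.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: ha-example
spec:
  version: 8.15.0
  nodeSets:
  - name: master
    count: 3                     # three dedicated master-eligible nodes
    config:
      node.roles: ["master"]
  - name: hot
    count: 2                     # more than one node in the hot tier
    config:
      node.roles: ["data_hot", "data_content", "ingest"]
  - name: warm
    count: 2                     # more than one node in the warm tier
    config:
      node.roles: ["data_warm"]
```

With two or more nodes per tier, an index keeping at least one replica (`index.number_of_replicas: 1`, the default) retains a copy of every shard elsewhere in the same tier while a node restarts, which is what avoids the `status:red` search/ingestion issues mentioned in the first commit message.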