[elasticsearch] make service configurable #123
This annotation is deprecated: kubernetes/kubernetes#63742
Why change this? It seems less safe than what was already there. Yes, it will fail if `$master` is an empty string, but if the API returns anything else (or some kind of weird error) the result is a non-empty string and the check will pass.

Did you find an issue with the current logic that this fixes?
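To illustrate the concern, here is a hypothetical version of the check as a plain non-empty test (the variable name and endpoint follow the discussion below; this is not the chart's exact code):

```bash
#!/usr/bin/env bash

# Without --fail, curl prints the HTTP error body, so $master can hold
# an error message rather than a node name.
master="$(curl -s "http://localhost:9200/_cat/master?h=node")"

# A bare non-empty test passes for a real node name, but also for an
# unexpected error payload such as a master_not_discovered_exception.
if [[ -n "$master" ]]; then
  echo "assuming a master exists: $master"
fi
```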
I migrated an es cluster from a cluster (all nodes have the master, ingest and data roles) to this chart. The previous master was not prefixed by `uname`, so I had to wait several minutes during a rollout caused by a configuration change, while nodes from both the old and new cluster existed, before excluding the old nodes from cluster shard allocation.

I think this is unnecessary because the `--fail` option is passed to curl (https://github.com/elastic/helm-charts/blob/master/elasticsearch/templates/statefulset.yaml#L239), so the script will exit if an error is returned from the API (`$master` won't have a value). As the `h=node` query param is passed, `$master` will only contain the node name of the master node if it's not empty.
Was my explanation not enough? I can restore this if needed, since it only happens in a specific migration case.
Thanks for noticing this. The combination of `set -e` and `curl --fail` means that this script might exit early if the call fails. Ideally we want this loop to keep running until we can see that a new master exists that isn't the current pod. Exiting early is not what we want to happen here.

My concern is whether or not the API will return an error when there is no master. If the API returns a weird message like "no master yet" with a 200, then it's possible for the script to exit too early. Looking at the code of Elasticsearch, it looks like it will actually return a dash (`-`) if there is no master found (see the sketch below for how the loop could tolerate a failed call).

Did it just mean that Kubernetes waited for the 120 second timeout while stopping each of the masters?
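A hypothetical sketch of a loop that retries instead of exiting early under `set -e` (the node-name lookup and endpoint are assumptions, not the chart's exact code):

```bash
#!/usr/bin/env bash
set -euo pipefail

NODE_NAME="$(uname -n)"  # assumed: the pod's own node name

while true; do
  # `|| true` stops a failed `curl --fail` from aborting the whole
  # script under `set -e`, so the loop retries instead of exiting early.
  master="$(curl -sS --fail "http://localhost:9200/_cat/master?h=node" || true)"

  # Per the discussion above, Elasticsearch prints a dash when no master
  # has been elected yet; an empty string means the call itself failed.
  if [[ -n "$master" && "$master" != "-" && "$master" != "$NODE_NAME" ]]; then
    echo "new master elected: $master"
    break
  fi
  sleep 1
done
```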
I think what actually makes a lot more sense is to check that it starts with `{{ template "masterService" . }}`. That way this will always point to the prefix for the right master, even during migrations like in https://github.com/elastic/helm-charts/blob/master/elasticsearch/examples/migration/README.md (a sketch of the check follows below).

Sorry for the slow reply. I'm currently on a work trip and only have a few quiet moments to sneak some work in. I'll be at Kubecon in Barcelona next week too, so things will be slow for a little while. If you are at Kubecon come and say hi at the Elastic booth :)
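A minimal sketch of that prefix check; the rendered value of `{{ template "masterService" . }}` is an assumption here (in a real chart it comes from the template):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Assumed rendering of {{ template "masterService" . }} for illustration.
MASTER_SERVICE="elasticsearch-master"

master="$(curl -sS --fail "http://localhost:9200/_cat/master?h=node" || true)"

# A prefix match only accepts masters named by this chart, so masters
# from the old cluster seen during a migration don't satisfy the check.
if [[ "$master" == "$MASTER_SERVICE"* ]]; then
  echo "master from this chart is elected: $master"
fi
```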
Yes, I had to wait 120 seconds for each master node.

Changed to `{{ template "masterService" . }}`!

I'm not going to this Kubecon, but I hope to visit and say hi someday :)