
Kibana does not support rolling upgrades #2049

Closed
pebrc opened this issue Oct 23, 2019 · 3 comments · Fixed by #2137
Labels
>bug Something isn't working

Comments

pebrc (Collaborator) commented Oct 23, 2019

Kibana currently does not support rolling upgrades.

The expected upgrade path for Kibana is to stop all Kibana instances of the previous version and then start up the new version. We currently use a Deployment, which violates this invariant.

https://www.elastic.co/guide/en/kibana/current/upgrade.html

Running into trouble with saved object migrations is only one effect of running multiple versions of Kibana at the same time; see https://www.elastic.co/guide/en/kibana/current/upgrade-migrations.html

@pebrc pebrc added the >bug Something isn't working label Oct 23, 2019
pebrc (Collaborator, Author) commented Oct 24, 2019

Some implementation ideas discussed offline:

  • distinguish between updates and upgrades: the former being a configuration change while staying on the same version, the latter a version change
    • for the update case we could stick with the current approach
    • for the version upgrade we would need to implement a strategy that completely halts all Kibana processes before rolling out the new version
      • @anyasabo and @sebgl suggested scaling down the deployment and waiting for all pods to terminate before scaling it up again based on the new spec
      • we could compare a version annotation on the deployment (or use the image attribute) to determine:
        • if the version is less than the expected version, scale down the deployment
        • if the version is less than the expected version and the deployment has 0 replicas, check whether it is actually scaled down (TBD: is the deployment status enough?)
        • if the version is less than the expected version and the deployment is scaled down, update the deployment template, update the version annotation and scale up again
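The three-phase flow above could be sketched roughly like this. The types and the string comparison are illustrative only: the real operator would read these values from the Deployment's annotations and status via client-go, and would compare versions semantically rather than by plain equality.

```go
package main

import "fmt"

// deploymentState is a hypothetical condensed view of the Kibana Deployment.
type deploymentState struct {
	versionAnnotation string // version currently recorded on the Deployment
	specReplicas      int32  // desired replicas in the Deployment spec
	readyReplicas     int32  // replicas reported ready in the status
}

// decideUpgradeStep returns the next action of the scale-down/scale-up
// upgrade strategy discussed above.
func decideUpgradeStep(d deploymentState, expectedVersion string, desiredReplicas int32) string {
	if d.versionAnnotation == expectedVersion {
		return "no-op: already on expected version"
	}
	switch {
	case d.specReplicas > 0:
		// old version still scaled up: start the full stop
		return "scale down to 0 replicas"
	case d.readyReplicas > 0:
		// spec says 0 but pods are still around: wait for actual termination
		return "wait: pods still terminating"
	default:
		// fully stopped: safe to roll out the new version
		return fmt.Sprintf("update template and annotation to %s, scale up to %d replicas",
			expectedVersion, desiredReplicas)
	}
}

func main() {
	fmt.Println(decideUpgradeStep(deploymentState{"7.4.0", 3, 3}, "7.5.0", 3))
}
```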

thbkrkr (Contributor) commented Oct 28, 2019

Another idea for the version upgrade, reusing existing k8s mechanisms: set the rolling update strategy of the deployment with maxUnavailable at 100% and maxSurge at 0. I ran a quick test and it looks OK. All existing containers are killed before new ones are created, which causes a downtime of about 30s.
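On the Deployment this suggestion would look roughly like the fragment below (only the strategy stanza matters here; the resource name is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 100%  # allow all old pods to go down at once
      maxSurge: 0           # never run old and new pods side by side
```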

sebgl (Contributor) commented Oct 29, 2019

Another idea for the version upgrade to reuse existing k8s mechanisms: set the rolling update strategy of the deployment with MaxUnavailable to 100% and MaxSurge to 0

Is there still a risk we may run 2 different versions of Kibana at the same time? Example:

  • Kibana deployment with 2 replicas
  • 1 replica is down for whatever reason
  • User updates Kibana version
  • Second replica is recreated with the newer version while the first replica still runs the old version

Maybe that's me misunderstanding the Deployment rollingUpdate, though.

I see in the docs that we can set .spec.strategy.type=Recreate :

All existing Pods are killed before new ones are created

That seems to be exactly what we need here.
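For reference, the Recreate strategy is a one-line change on the Deployment spec (fragment; no rollingUpdate parameters apply in this mode):

```yaml
spec:
  strategy:
    type: Recreate  # all existing pods are killed before new ones are created
```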
