
upgrade questions #417

Open
bfallik opened this issue Apr 16, 2016 · 6 comments

Comments

@bfallik
Contributor

bfallik commented Apr 16, 2016

I have a few questions related to coreos-kubernetes upgrades.

First, locally I've been testing with the single-node Vagrant config. The Vagrantfile specifies the CoreOS alpha release stream but disables the update system, presumably because there's no non-disruptive way to upgrade a cluster of size 1. Is the expectation that developers manually invoke the update engine when they want to force a CoreOS upgrade? If so, is that documented anywhere?
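For reference, the manual path I'd expect looks roughly like the sketch below, assuming updates were disabled by masking update-engine, which may not be exactly how the Vagrantfile does it:

```bash
# Rough sketch: forcing a CoreOS update by hand on the single-node Vagrant box.
# Assumes updates were disabled by masking update-engine.service.
sudo systemctl unmask update-engine.service
sudo systemctl start update-engine.service
sudo update_engine_client -check_for_update   # kick off a check against the alpha channel
sudo update_engine_client -status             # poll until UPDATE_STATUS_UPDATED_NEED_REBOOT
sudo reboot                                    # boot into the new CoreOS image
```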

Second, as new releases of this repo are announced, how are existing clusters expected to upgrade? I'm wondering for both single-node testing as well as AWS clusters. Is there a plan for an upgrade operation, or is the idea that most/all of the configuration captured in coreos-kubernetes is just for initial bootstrapping? For instance, the 0.6.1 release says:

This release provisions Kubernetes v1.2.2

Notable changes include:

- Heapster addon is now a deployment instead of a replication controller.
- Heapster now automatically scales its resources based on the number of nodes in a cluster
- Route53 host records can be created automatically
- Improved validation and UX

but it's not clear how I would upgrade an existing cluster to v1.2.2 or switch Heapster from a replication controller to a Deployment, besides manually making edits to the startup scripts already deployed into CoreOS.
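For the Heapster change, for instance, I assume the manual route would be something along these lines (the label selector and manifest filename here are just my guesses):

```bash
# Guesswork only: swapping the Heapster add-on from an RC to a Deployment by hand.
# The actual labels and manifest file depend on the release being deployed.
kubectl --namespace=kube-system delete rc -l k8s-app=heapster
kubectl --namespace=kube-system apply -f heapster-deployment.yaml   # hypothetical manifest name
```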

Thanks in advance!

@colhom
Contributor

colhom commented Apr 18, 2016

@bfallik awesome questions.

I can only speak with regard to AWS clusters. There are no immediate plans to support upgrades on Vagrant single/multi node clusters.

The first item on #340 (our production readiness checklist) is a non-disruptive cluster upgrade path. That is dependent on having a separate etcd cluster and a mechanism for decommissioning nodes.

As we will be replacing AWS EC2 instances to enact the update, this will encompass the underlying CoreOS as well as the Kubernetes components.

At that point, we will be able to generate updated assets with kube-aws and leverage CloudFormation stack updates to roll those changes out to the cluster in a non-disruptive manner.
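Roughly speaking, the workflow would look something like the following; the exact kube-aws subcommands and flags are still to be determined, and the stack name below is just a placeholder:

```bash
# Illustrative only: regenerate cluster assets, then push them out via a
# CloudFormation stack update.
kube-aws render   # re-render the CloudFormation template and cloud-config from cluster.yaml
aws cloudformation update-stack \
  --stack-name my-kube-cluster \
  --template-body file://stack-template.json \
  --capabilities CAPABILITY_IAM
```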

@bfallik
Contributor Author

bfallik commented Apr 19, 2016

@colhom equally awesome answers. =)

Everything you wrote makes sense w.r.t. AWS clusters. Is the idea that new releases of this tool would include new releases of kube-aws, which would generate stack updates for new versions of CoreOS and also contain updates for the kubelet and other Kubernetes processes (e.g. K8S_VER)?
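In other words, would an upgrade eventually boil down to the equivalent of bumping something like this on each node? (The file path and variable name here are just my assumptions about how the kubelet unit is wired up; they may differ per release.)

```bash
# Assumption: the provisioning scripts pin the kubelet version via an
# Environment= line in the kubelet unit. Paths and names may differ.
sudo sed -i 's/^Environment=KUBELET_VERSION=.*/Environment=KUBELET_VERSION=v1.2.2_coreos.0/' \
  /etc/systemd/system/kubelet.service
sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
```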

@colhom
Contributor

colhom commented Jul 5, 2016

@bfallik just a ping: this is still a priority. Are you interested in helping out with some contributions?

@bfallik
Contributor Author

bfallik commented Jul 5, 2016

@colhom yes, with the proper direction. I've been actively following this project and am eager to help out, but it's not always clear to me which direction CoreOS and/or the maintainers want to take. I don't want to craft a PR that doesn't fulfill all the requirements or misses some key design point.

@tarvip

tarvip commented Nov 18, 2016

Regarding coreos-kubernetes upgrades on AWS, is there a way to upgrade Kubernetes nodes without node replacement?
At the moment, when a new patch version comes out, I create a new launch configuration with the new version, add new nodes to the cluster, mark the old nodes unschedulable, and bounce all pods on the old nodes, which relocates them onto the new nodes. But this is a rather tedious procedure.
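For each old node that boils down to something like this (the node name is a placeholder, and drain may need extra flags depending on the version):

```bash
# Drain marks the node unschedulable and evicts its pods so they get
# rescheduled onto the new nodes. Node name is a placeholder.
kubectl drain ip-10-0-0-50.ec2.internal --ignore-daemonsets --force
```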

According to the docs (https://coreos.com/kubernetes/docs/latest/kubernetes-upgrade.html) it is possible to change the versions manually on the nodes. I tried it and it works, and in that case I don't have to bounce the pods. But the problem is the AWS user data (#cloud-config): when the host gets rebooted (for whatever reason), the old kubelet is used again, because after the reboot the config files are overwritten with the config from the user data. Also, there is no way to change the user data while the instance is running.
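(As far as I can tell, the user data can only be replaced while the instance is stopped, which isn't practical for a live cluster. Roughly, with a placeholder instance ID and a file assumed to already contain base64-encoded cloud-config:)

```bash
# The instance must be stopped before its userData attribute can be changed.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --attribute userData \
  --value file://cloud-config.base64
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```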

@pieterlange

@tarvip at this point I think self-hosted Kubernetes would be the solution to your ills. But this is not quite ready yet.

Also, kube-aws development is happening in its own separate repo now: https://github.com/coreos/kube-aws
