upgrade questions #417
@bfallik awesome questions. I can only speak with regard to AWS clusters. There are no immediate plans to support upgrades on Vagrant single/multi-node clusters. The first item on #340 (our production readiness checklist) is a non-disruptive cluster upgrade path. That depends on having a separate etcd cluster and a mechanism for decommissioning nodes. As we will be replacing AWS EC2 instances to enact the update, this will encompass the underlying CoreOS as well as the Kubernetes components. At that point, we will be able to generate updated assets with
@colhom equally awesome answers. =) Everything you wrote makes sense w.r.t. AWS clusters. Is the idea that new releases of this tool would include new releases of kube-aws, which would generate stack updates for new versions of CoreOS and also contain updates for the kubelet and other Kubernetes processes (e.g. K8S_VER)?
@bfallik just a ping: this is still a priority. Are you interested in helping out with some contributions?
@colhom yes, with the proper direction. I've been actively following this project, eager to help out, but it's not always clear to me what direction CoreOS and/or the maintainers want to take. I don't want to craft a PR that doesn't fulfill all the requirements or misses some key design point.
Regarding coreos-kubernetes upgrades on AWS, is there a way to upgrade Kubernetes nodes without node replacement? According to the docs (https://coreos.com/kubernetes/docs/latest/kubernetes-upgrade.html) it is possible to change the versions manually on the nodes; I tried it and it works, and in that case I don't have to bounce the pods. The problem is with the AWS userdata (#cloud-config): when a host gets rebooted (for whatever reason), the old kubelet is used, because after a reboot the config files are overwritten with the config from the userdata. There is also no way to change the userdata while the instance is running.
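The failure mode described above follows from the shape of a typical #cloud-config: the kubelet version is pinned in a systemd unit carried in the instance userdata, so cloud-init re-renders that unit on every boot and clobbers any manual edits made on the host. A minimal sketch of such a unit (the unit name, environment variable, and flag values here are illustrative assumptions, not the exact assets this repo generates):

```yaml
#cloud-config
coreos:
  units:
    - name: kubelet.service
      command: start
      content: |
        [Service]
        # This version wins after every reboot: cloud-init re-renders
        # this unit from the instance userdata, reverting any manual
        # version change made on the node. (Illustrative value.)
        Environment=K8S_VER=v1.3.0_coreos.0
        ExecStart=/usr/lib/coreos/kubelet-wrapper \
          --api-servers=http://127.0.0.1:8080 \
          --allow-privileged=true
        Restart=always

        [Install]
        WantedBy=multi-user.target
```

Since EC2 does not allow editing the userdata of a running instance, a persistent version bump has to flow through a new userdata payload, i.e. instance replacement via a stack update, rather than in-place edits on the node.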
@tarvip at this point I think self-hosted Kubernetes would be the solution to your ills, but it is not quite ready yet. Also, kube-aws development is happening in its own separate repo now: https://github.com/coreos/kube-aws
I have a few questions related to coreos-kubernetes upgrades.
First, locally I've been testing with the single-node Vagrant config. The Vagrantfile specifies the CoreOS alpha release stream but disables the update system, presumably because there's no non-disruptive way to upgrade a cluster of size 1. Is the expectation that developers manually invoke the update system when they want to force an upgrade of CoreOS? If so, is that documented anywhere?
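For reference, disabling automatic updates is done with the standard CoreOS cloud-config update stanza; a minimal sketch (assuming this is how the Vagrant config does it):

```yaml
#cloud-config
coreos:
  update:
    # "off" stops locksmith from rebooting the single node out from
    # under the cluster; OS updates must then be triggered by hand.
    reboot-strategy: "off"
```

With that in place, an update can still be forced manually on the node by running `update_engine_client -check_for_update` and rebooting once the new image has been staged.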
Second, as new releases of this repo are announced, how are existing clusters expected to upgrade? I'm wondering both for single-node testing and for AWS clusters. Is there a plan for an upgrade operation, or is the idea that most/all of the configuration captured in coreos-kubernetes is just for initial bootstrapping? For instance, the 0.6.1 release says:
Thanks in advance!