Automate stacked etcd into kubeadm join --control-plane workflow #1123
C seems like the simplest solution, but I'd love to hear more about A. I think we've really got a couple of use cases: stacked control plane nodes scale out to some number of n nodes before etcd needs dedicated hosts, and then it would be great if we had a path to switch to external/dedicated hosts. I'd rule out B and D for now unless there is a compelling reason to add that complexity.
From what I understand, stacked etcd is an etcd instance like local etcd, with the difference that it listens on a public IP instead of 127.0.0.1 and it has a bunch of additional flags/certificate SANs.
Does this sound reasonable to you?
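For concreteness, here is a minimal sketch (my reading of this option, not an agreed design) of how the existing local etcd configuration could express that, assuming a v1beta1-style ClusterConfiguration; the 10.10.10.11 address is a placeholder for the node's routable IP:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  local:
    # Extra SANs so the serving and peer certificates are valid for the
    # node's routable IP, not just 127.0.0.1.
    serverCertSANs:
    - "10.10.10.11"
    peerCertSANs:
    - "10.10.10.11"
    # Flag overrides so etcd listens and advertises on the routable IP.
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.10.10.11:2379"
      advertise-client-urls: "https://10.10.10.11:2379"
      listen-peer-urls: "https://10.10.10.11:2380"
      initial-advertise-peer-urls: "https://10.10.10.11:2380"
```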
Great suggestions, let's keep this in mind as well.
If I'm understanding this option, it would basically just extend the existing local etcd mode to support the additional flags, SANs, etc. that the stacked deployment currently uses, and it is mainly about providing an upgrade path for existing local etcd-based deployments rather than providing HA support itself. Is that correct? That said, it would require config changes to make this work, since we would need to expand the per-node configuration to include etcd config/overrides for things such as which IP, hostname, or SANs to use (if the defaults are not sufficient).
I don't like this option as it requires users to make a decision for HA/non-HA support before starting.
+1 for this; if there is a need to have a different number of etcd hosts vs control plane instances, then external etcd should be used instead.
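For comparison, the external etcd stanza already supported by kubeadm looks roughly like this (endpoints and certificate paths below are placeholders); it is the natural choice when the etcd member count differs from the control plane node count:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  external:
    # One entry per dedicated etcd host; the count is independent of the
    # number of control plane nodes.
    endpoints:
    - "https://10.10.10.21:2379"
    - "https://10.10.10.22:2379"
    - "https://10.10.10.23:2379"
    # Client credentials used to reach the external cluster.
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```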
While I could see some value in this, the ability to use it would be limited since we don't provide a way to init a single etcd instance. I would expect that workflow to look like the following:
Where the entire etcd cluster is bootstrapped prior to bootstrapping the control plane instances. That workflow would require kubeadm to have access to the client certificate to manipulate etcd, which is not currently the case. I'm not exactly sure how we are currently handling this for extending the control plane. The nice thing about this approach is that it would simplify the external etcd story as well, but I think it should be in addition to
@detiber happy to see we are on the same page here!
Yes, but with the addition that, before adding etcd members, we are going to call
This will increase the HA of the cluster, with the caveat that each API server uses only the etcd endpoint of its own local etcd (instead of the full list of etcd endpoints). So if an etcd member fails, all the control plane components on the same node will fail and everything will be switched to another control plane node. NB: this can be improved to a certain extent by passing to the API server the list of etcd endpoints known at the moment of join.
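To illustrate the caveat, here is a sketch of the relevant kube-apiserver manifest fragment (placeholder IPs): the first variant is the per-node wiring described above, the second is what passing the endpoints known at join time would look like.

```yaml
# Variant 1: each API server talks only to its co-located etcd member.
- kube-apiserver
- --etcd-servers=https://10.10.10.11:2379

# Variant 2: each API server is given all etcd endpoints known at join time.
- kube-apiserver
- --etcd-servers=https://10.10.10.11:2379,https://10.10.10.12:2379,https://10.10.10.13:2379
```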
Yes, but I consider these changes less invasive than creating a whole new etcd type.
+1
@fabriziopandini For the issue with the control plane being fully dependent on the local etcd, there is an issue to track the lack of etcd auto sync support within Kubernetes itself: kubernetes/kubernetes#64742
/lifecycle active
@detiber @chuckha @timothysc
On the first control plane node (master1), the etcd manifest contains:

```yaml
- etcd
- --advertise-client-urls=https://10.10.10.11:2379
- --initial-advertise-peer-urls=https://10.10.10.11:2380
- --initial-cluster=master1=https://10.10.10.11:2380
- --listen-client-urls=https://127.0.0.1:2379,https://10.10.10.11:2379
- --listen-peer-urls=https://10.10.10.11:2380
....
```
After joining a second control plane node (master2), its etcd manifest contains:

```yaml
- etcd
- --initial-cluster=master1=https://10.10.10.11:2380,master2=https://10.10.10.12:2380
- --initial-cluster-state=existing
....
```

So far so good. Now the tricky question: when kubeadm executes upgrades it will recreate the etcd manifest. Are there any settings I should take care of because I'm upgrading an etcd cluster instead of a single etcd instance?
@detiber @chuckha @timothysc
The initial-cluster settings are read by etcd only when a member first bootstraps, so it doesn't matter which values I assign to them afterwards. Considering this, my idea is to keep the upgrade workflow "simple" and generate the new etcd manifest without compiling the full initial-cluster list. Opinions?
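As a sketch of that idea (hypothetical output, not necessarily what kubeadm will generate), an upgraded member's manifest could default the initial-cluster value to the member itself, relying on etcd ignoring it once the data directory already exists:

```yaml
# Hypothetical regenerated manifest fragment for master2 after an upgrade.
# The member is already part of the cluster, so the initial-cluster value
# no longer affects membership and can list only the member itself.
- etcd
- --advertise-client-urls=https://10.10.10.12:2379
- --initial-advertise-peer-urls=https://10.10.10.12:2380
- --initial-cluster=master2=https://10.10.10.12:2380
- --listen-client-urls=https://127.0.0.1:2379,https://10.10.10.12:2379
- --listen-peer-urls=https://10.10.10.12:2380
```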
Last bit:
/close
@fabriziopandini: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Stacked etcd is a manual procedure described in https://kubernetes.io/docs/setup/independent/high-availability/.
However, kubeadm could automate the stacked etcd procedure as a new step of the kubeadm join --control-plane workflow. Some design decisions should be taken before implementing.
Considering the goal of keeping kubeadm simple and maintainable, IMO preferred options are A) and C)… wdyt?
cc @detiber @chuckha @timothysc