
Allow usage of KubeletConfiguration next to JoinConfiguration on kubeadm join #1682

Closed
4 tasks done
fabriziopandini opened this issue Jul 22, 2019 · 36 comments
Assignees
Labels
  • kind/api-change: Categorizes issue or PR as related to adding, removing, or otherwise changing an API.
  • kind/design: Categorizes issue or PR as related to design.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
Milestone

Comments

@fabriziopandini added the priority/important-longterm and kind/feature labels Jul 22, 2019
@fabriziopandini (Member, Author)

the tricky part here is how to preserve node-specific settings during upgrades...

@rosti commented Jul 22, 2019

Allowing for customization on a per-node basis: yes. Allowing for a full replacement: probably not.
Component configs are shared on a per-cluster basis and are stored in ConfigMaps.
Allowing them to be patched with node-local settings is a must-have, but allowing their total replacement is probably not desired.

@fabriziopandini (Member, Author) commented Jul 22, 2019

I'm +1 on merging this with the "Advanced configurations with kubeadm (Kustomize)" effort (customization on a per-node basis)

As an alternative, if we prefer to keep the scope of Kustomize limited for the first iterations, we should provide a way to specify the ConfigMap to read from, and keep track of it with a new node annotation.

@rosti commented Jul 24, 2019

My bet is to push in the Kustomize direction. I don't think there are many users pushing for this kind of feature, so the delay (necessary for Kustomize) is acceptable.

@neolit123 neolit123 added this to the v1.17 milestone Sep 2, 2019
@neolit123 neolit123 added the kind/design Categorizes issue or PR as related to design. label Oct 13, 2019
@neolit123 neolit123 modified the milestones: v1.17, v1.18 Nov 9, 2019
@BenTheElder (Member)

Without this you can customize the kubelet per node, but only via flags, which is problematic given that many of them are deprecated.

@fabriziopandini (Member, Author)

Ref: kubernetes/enhancements#1439

@stealthybox (Member)

the tricky part here is how to preserve node-specific settings during upgrades...

^ the NodeGroup ConfigMap thing makes sense for persistent configurations.
We would need a way to specify a map of NodeGroups to KubeletConfigurations when doing kubeadm init --config <>

I don't think that lack of persistent config should prevent people from overriding the kubelet config at the local level on Join/Upgrade.

If a user can distribute a JoinConfiguration to every node, they can manage their KubeletConfigurations.

Supporting this is as easy as respecting the KubeletConfiguration GVK when it's passed with --config to kubeadm join and/or kubeadm upgrade node.
This would be superior to the limited usage we have now, where only one KubeletConfiguration can live in the cluster and users have to do weird things.
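
As an illustration of what this comment proposes, a multi-document --config file for join could look roughly like this. This is a sketch only: at the time of the comment, kubeadm did not accept a KubeletConfiguration next to the JoinConfiguration, and the API versions and field values shown are illustrative:

```yaml
# hypothetical: kubeadm join --config join.yaml, with kubeadm honoring a
# KubeletConfiguration document placed next to the JoinConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "abcdef.0123456789abcdef"   # illustrative token
    apiServerEndpoint: "10.0.0.1:6443" # illustrative endpoint
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# node-local override this node would keep instead of the cluster-wide value
maxPods: 200
```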

@fabriziopandini (Member, Author)

I'm -1 on the --config option because it complicates all the automation around join/upgrades.
It also (re)opens a can of worms with regard to "changing the cluster" during upgrades.
IMO:

  • we should avoid requiring users to move files around nodes
  • ideally, we should promote solutions that drive users toward standardisation of nodes, like t-shirt sizes for nodes
  • upgrade node should have no parameters and leverage the info stored at join time.
    Also, we should consider the full story, including init, join, and upgrade, but also reset and possibly changing the cluster with the kubeadm operator

@BenTheElder (Member)

It continues to confuse kind users that this does not work.

Only being able to patch kubelet flags per node is a terrible place to be, with everything moving to component config.

I don't think "t-shirt sizes" is relevant at all, I could pick a different "t-shirt size" for every single node...

kubeadm should enable users to do what they need.

@neolit123 (Member)

It might be a good idea to document this until the solution is available.
We are not seeing this issue in the kubeadm repository, so maybe it should go in the kind troubleshooting guide temporarily.

@BenTheElder (Member) commented Mar 19, 2020

how are other kubeadm users doing e.g. node labels? what about cluster API machinesets?

I would guess almost everything is relying on patching the flags, which is not a viable path forward as the rest of the project pushes component config.

If we're against supporting this in kubeadm, I'm inclined to simply provide higher-level kubelet config patching and steer users this way; there are many reasonable things one may want to configure more granularly than cluster-wide.

@neolit123 (Member) commented Mar 19, 2020

If we're against supporting this in kubeadm, I'm inclined to simply provide higher-level kubelet config patching and steer users this way; there are many reasonable things one may want to configure more granularly than cluster-wide.

for the time being the kubeadm developers are busy with higher-priority tasks.
in fact, for the last few weeks "kubeadm developers" == me, mostly.

how are other kubeadm users doing e.g. node labels? what about cluster API machinesets?

AFAIK, --node-labels and --node-ip are CLI flags only and are not present in the KubeletConfiguration (because they are not "cluster-wide"). As I've mentioned today in the kind ticket, next to kubeadm supporting this per node, the kubelet needs a bit of a redesign.

for workers, nowadays users (including Cluster API) mostly pass custom flags in JoinConfiguration -> ... kubeletExtraArgs.
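
For reference, the kubeletExtraArgs approach mentioned above looks roughly like this in a v1beta3 JoinConfiguration (the field values are illustrative; these entries are rendered as the --node-labels and --node-ip kubelet CLI flags):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # passed to the kubelet as CLI flags on this node only
    node-labels: "tier=worker,disk=ssd"
    node-ip: "10.0.0.5"
```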

@BenTheElder (Member) commented Mar 19, 2020 via email

@neolit123 (Member) commented Mar 19, 2020

People like mtaufen and rosti in WG Component Standard are working on instance-specific component config.

maybe we can enable the KubeletConfiguration next to JoinConfiguration for kubeadm join, but the argument in the above comments is that we have no idea how to upgrade this existing config with the one that comes from the cluster during kubeadm upgrade.

@pytimer commented Apr 13, 2020

the tricky part here is how to preserve node-specific settings during upgrades...

I use a ConfigMap named kubelet-config- as the kubelet spec.configSource.configMap for every node, because every node may have a different configuration in my cluster. I'm not sure if kubeadm can create a ConfigMap for each node that includes the KubeletConfiguration plus other custom kubelet configuration. Is that OK?

@neolit123 (Member)

k/k PR is up:
kubernetes/kubernetes#110405

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label and removed the lifecycle/active label Feb 19, 2023
@SataQiu (Member) commented Feb 20, 2023

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Feb 20, 2023
@neolit123 (Member)

Q for 1.28, with a lazy consensus until 1.28 releases.

kubeadm patch support for kubeletconfiguration

this feels like sufficient support for this functionality for the time being. What do folks think? Proposing to close.

In the future we could allow the user to pass a KubeletConfiguration on join, but under the hood it would act like applying a marshaled patch.

It also has one weird UX side effect: if the user passes a KubeletConfiguration on init, that becomes the global config, and they can also apply a patch on that init node with the kubeletconfiguration target, making a "mod" for that node.
Subsequent nodes can also apply "mods" with patches over the global config. This suggests that patches are the preferred way to manage the node-local kubelet config.

If we add the KubeletConfiguration on join, this means the user must use patches only to "mod" the init node.
It also duplicates the patches functionality at this point.

I recall @fabriziopandini had an idea to implement a "node selector" for a set of KubeletConfiguration objects passed on init. This solves the above problem, but I think an issue might arise in cases where nodes are added after some time and the demands for a diff from the global config have changed. Also, future nodes might have different names that are not tracked.

My vote goes for keeping patches as the main functionality for node-specific kubelet config.
It comes with UX that is not so easy, and may have upgrade problems once the kubelet config moves to version Foo in the future, but it seems manageable.
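
For context, the patches mechanism referenced above works by pointing kubeadm at a directory of patch files whose names encode the target component. A node-local kubelet override could look roughly like this (the file name follows the documented target[suffix][+patchtype] convention; the path and values are illustrative):

```yaml
# /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml
# a strategic merge patch applied on top of the cluster-wide
# KubeletConfiguration on this node, e.g. via
#   kubeadm join --patches /etc/kubeadm/patches ...
maxPods: 200
systemReserved:
  memory: "512Mi"
```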

@neolit123 neolit123 modified the milestones: v1.27, v1.28 Apr 17, 2023
@BenTheElder (Member)

[This should not be a blocker]

Noting that this may have some weird side effects for kind. Currently we pass in an identical yaml document bundle with all objects on all nodes; however, users may also supply per-node patches.

I intended to switch us to patching cluster components vs kubeadm config differently and to update the API to reflect this, but it hasn't happened yet. If this is happening in 1.28 I may want to prioritize reworking kubeadm config patches in kind.

Am I reading correctly that this would be in 1.29 at the earliest?

@neolit123 (Member)

Am I reading correctly that this would be in 1.29 at the earliest?

my comment about the lazy consensus was about potentially closing this ticket once 1.28 releases, if we haven't decided whether we really want a KubeletConfiguration on join.

I intended to switch us to patching cluster components vs kubeadm config differently and updating the API to reflect this but it hasn't happened yet. If this is happening in 1.28 I may want to prioritize reworking kubeadm config patches in kind.

I don't recall how kubeadm patches work in kind currently. I think it's similar to CAPI.
CAPI allows users to pass such a config snippet in InitConfiguration/JoinConfiguration:

patches:
  directory: /foo/bar

then /foo/bar can be mounted on the nodes and it has the patch files.
it is a bit manual, but seems OK to me as long as it's documented.
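
Put together, the snippet above would sit in a JoinConfiguration something like this (a sketch; the API version is illustrative, and /foo/bar stands for whatever directory is mounted on the node):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
patches:
  # kubeadm reads patch files, e.g. kubeletconfiguration.yaml, from here
  directory: /foo/bar
```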

@BenTheElder (Member)

i don't recall how kubeadm patches work in kind currently.

It's not the same as CAPI because it predates kubeadm's built-in patching support. KIND allows inline snippets in the kind config and then patches a manifest containing Init/Join/Kubelet/KubeProxy.

I read this backwards, as deciding by the 1.28 release whether you want to go forward and add this.

Not adding this would not create any additional confusion then. What KIND probably should be doing is letting users target the kubeadm built-in patches, or applying the existing kind patching routines to the generated kubelet config on each node instead of to the input to join. Currently, attempting to patch the kubelet config on a secondary node doesn't work how you might expect with the kubeadmConfigPatches support in kind.

@chendave (Member)

I am inclined to exclude this from the v1beta4 feature list since it is not decided yet; patches are the way to support node-specific kubelet config for the time being.

Folks, any more comments on this feature request?

@neolit123 neolit123 modified the milestones: v1.28, v1.29 Jul 21, 2023
@neolit123 (Member)

#1682 (comment)
