
Cannot patch kubelet configuration #1270

Closed · crixo opened this issue Jan 19, 2020 · 9 comments
Labels: kind/bug (Categorizes issue or PR as related to a bug.), kind/external (upstream bugs)

crixo commented Jan 19, 2020

What happened:
KubeletConfiguration patches have not been applied

What you expected to happen:
After connecting to the worker node on which I configured the patch:

```sh
docker exec -it staticpod-worker /bin/bash
```

I expected to see the value set by the patch in the config file:

```sh
ps -ef | grep kubelet
grep staticPodPath /var/lib/kubelet/config.yaml
```

Instead it still shows the default value: `staticPodPath: /etc/kubernetes/manifests`

How to reproduce it (as minimally and precisely as possible):
This is the `cluster-config.yaml` file I used to create the cluster:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  # kubeadmConfigPatches:
  #   - |
  #     apiVersion: kubelet.config.k8s.io/v1beta1
  #     kind: KubeletConfiguration
  #     staticPodPath: "/foo/kubelet.d/"
  kubeadmConfigPatchesJSON6902:
    - group: kubelet.config.k8s.io
      version: v1beta1
      kind: KubeletConfiguration
      patch: |
        - op: replace
          path: /staticPodPath
          value: /foo/kubelet.d/
  extraMounts:
    - readOnly: false
      #containerPath: /etc/kubernetes/manifests
      #hostPath: ./kind-data/kubelet.d
      containerPath: /foo
      hostPath: ./kind-data
      propagation: None
- role: worker
```
Anything else we need to know?:
If I do not use the kubeadmConfig patches (I tried both kubeadmConfigPatches and kubeadmConfigPatchesJSON6902) and instead map the default staticPodPath folder to my local folder containing the YAML for the static pod, everything works as expected.

Environment:

  • kind version: (use kind version):
    kind v0.6.0 go1.13.4 darwin/amd64

  • Kubernetes version: (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:52:47Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-16T01:01:59Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

  • Docker version: (use docker info):
    Server Version: 19.03.5
    Storage Driver: overlay2
    Backing Filesystem: extfs
    Supports d_type: true
    Native Overlay Diff: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
    Volume: local
    Network: bridge host ipvlan macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
    runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
    init version: fec3683
    Security Options:
    seccomp
    Profile: default
    Kernel Version: 4.9.184-linuxkit
    Operating System: Docker Desktop
    OSType: linux
    Architecture: x86_64
    CPUs: 2
    Total Memory: 7.787GiB
    Name: docker-desktop

  • OS (e.g. from /etc/os-release):
    macOS High Sierra Version 10.13.6

crixo added the kind/bug label on Jan 19, 2020
BenTheElder (Member) commented:

/assign

BenTheElder (Member) commented:

So I can confirm that kind is patching this config and passing it to kubeadm; it's possible that kubeadm simply does not respect kubelet config during join :/


BenTheElder commented Jan 22, 2020

it looks like you have to use the kubelet flag in a kubeadm JoinConfiguration instead
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#JoinConfiguration

I'm reaching out in #kubeadm and will probably follow up with an upstream issue, you can instead do this in the meantime:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        "pod-manifest-path": "/foo/kubelet.d"
```
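The reason this workaround takes effect is that kubeadm passes `nodeRegistration.kubeletExtraArgs` to the kubelet as `--key=value` command-line flags. A sketch of that rendering (the `render_kubelet_flags` helper is an illustration, not kubeadm's actual code):

```python
def render_kubelet_flags(extra_args):
    """Render a kubeletExtraArgs map as kubelet command-line flags.

    Sketch only: kubeadm appends each entry as --key=value to the
    kubelet invocation on the node.
    """
    return [f"--{key}={value}" for key, value in sorted(extra_args.items())]

flags = render_kubelet_flags({"pod-manifest-path": "/foo/kubelet.d"})
print(" ".join(flags))  # --pod-manifest-path=/foo/kubelet.d
```

This is why the flag shows up in the `ps aux` output below, overriding `staticPodPath` from the config file.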

BenTheElder (Member) commented:

I can at least confirm that kubelet picks up the flag this way:

```sh
$ docker exec kind-worker ps aux | grep kubelet
root 575 21.2 0.0 1641952 60972 ? Ssl 21:22 0:01 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/run/containerd/containerd.sock --fail-swap-on=false --node-ip=192.168.9.4 --pod-manifest-path=/foo/kubelet.d --fail-swap-on=false
```

IIRC the flags do take precedence over the config so this should work even if it's not great.

BenTheElder (Member) commented:

tracking upstream in kubernetes/kubeadm#2008; AFAICT kind will patch the config and pass it to kubeadm, but the kubelet component config is ignored. I didn't need to do this previously and am a bit surprised by the behavior.

Depending on what upstream kubeadm decides regarding this, we'll follow up with work here to ensure it's supported in KIND. In the meantime I think you can still accomplish all config via `kubeletExtraArgs` 😬

@BenTheElder BenTheElder added the kind/external upstream bugs label Jan 22, 2020
BenTheElder (Member) commented:

moved to kubernetes/kubeadm#2008

BenTheElder (Member) commented:

This is in fact kubernetes/kubeadm#1682, there's an ongoing KEP discussion related to resolving this 😬


crixo commented Jan 23, 2020

> it looks like you have to use the kubelet flag in a kubeadm JoinConfiguration instead
> https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#JoinConfiguration
>
> I'm reaching out in #kubeadm and will probably follow up with an upstream issue, you can instead do this in the meantime:
>
> ```yaml
> kind: Cluster
> apiVersion: kind.x-k8s.io/v1alpha4
> nodes:
> - role: control-plane
> - role: worker
>   kubeadmConfigPatches:
>   - |
>     kind: JoinConfiguration
>     nodeRegistration:
>       kubeletExtraArgs:
>         "pod-manifest-path": "/foo/kubelet.d"
> ```
Thanks a lot for your hint and the effort you put into this awesome project. I really appreciate it.

BenTheElder (Member) commented:

Thanks for the feedback :-)

Hopefully we'll have a better answer for this one in the future; discussing some more with upstream.

If we can't get a solution upstream, we may be able to work around it in kind by applying the patches after the fact, directly to the generated file on disk ...
