
Adding per-node KubeadmConfiguration is not being applied #1424

Closed
qinqon opened this issue Mar 19, 2020 · 7 comments
Labels
kind/external (upstream bugs) · kind/support (Categorizes issue or PR as a support question.)

Comments

@qinqon
Contributor

qinqon commented Mar 19, 2020

What happened:

Starting a kind 0.7.0 cluster (Kubernetes 1.17.0) with the following configuration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry:5000"]
    endpoint = ["http://registry:5000"]
networking:
  ipFamily: ipv6
  apiServerAddress: "::1"
nodes:
- role: control-plane
- role: worker
  kubeadmConfigPatches:
    - |
      kind: KubeletConfiguration
      apiVersion: kubelet.config.k8s.io/v1beta1
      featureGates:
        CPUManager: true
      cpuManagerPolicy: static

What you expected to happen:

It should configure the featureGates and cpuManagerPolicy properties in the kubelet config, but the rendered config contains neither of them:

docker exec  kind-1.17.0-worker cat /var/lib/kubelet/config.yaml
address: '::'
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- fd00:10:96::a
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: '::'
healthzPort: 10248
httpCheckFrequency: 0s
imageGCHighThresholdPercent: 100
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

How to reproduce it (as minimally and precisely as possible):

cat << EOF > config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry:5000"]
    endpoint = ["http://registry:5000"]
networking:
  ipFamily: ipv6
  apiServerAddress: "::1"
nodes:
- role: control-plane
- role: worker
  kubeadmConfigPatches:
    - |
      kind: KubeletConfiguration
      apiVersion: kubelet.config.k8s.io/v1beta1
      featureGates:
        CPUManager: true
      cpuManagerPolicy: static
EOF
kind create cluster --config=config.yaml --image=kindest/node:v1.17.0
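
To check quickly whether the patch landed, one can grep the rendered kubelet config on the worker (container name and path taken from the output above):

docker exec kind-1.17.0-worker grep -E 'cpuManagerPolicy|CPUManager' /var/lib/kubelet/config.yaml

With the bug present this prints nothing.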

Anything else we need to know?:

Environment:

Fedora 31 x86_64

  • kind version: kind v0.7.0 go1.13.6 linux/amd64
  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:08:27Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version:
Containers: 5
 Running: 3
 Paused: 0
 Stopped: 2
Images: 11
Server Version: 18.09.8
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: /usr/libexec/docker/docker-init
containerd version: 
runc version: ce97911e3cd37a5ce3ef98f7f1d4add21a3ac162
init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Security Options:
 seccomp
  Profile: default
 selinux
Kernel Version: 5.4.12-200.fc31.x86_64
Operating System: Fedora 31 (Thirty One)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 70.77GiB
Name: modi05.eng.lab.tlv.redhat.com
ID: S6DE:FJKE:RQTW:ERIF:4DXA:A26I:YUMO:BPCM:4VLC:54BU:6BMI:UMNQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
 registry:5000
 127.0.0.0/8
Live Restore Enabled: true
  • OS:
NAME=Fedora
VERSION="31 (Thirty One)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Thirty One)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
qinqon added the kind/bug label (Categorizes issue or PR as related to a bug.) on Mar 19, 2020
@qinqon
Contributor Author

qinqon commented Mar 19, 2020

could it be this?
kubernetes/kubeadm#1614

@qinqon
Contributor Author

qinqon commented Mar 19, 2020

kubeadm.conf is correctly generated, so it looks like a kubeadm bug? it's not rendering the CPUManager settings.
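
(one way to verify that, assuming kind still places the generated config at /kind/kubeadm.conf inside the node container, is to dump it from the worker:

docker exec kind-1.17.0-worker cat /kind/kubeadm.conf

the patched KubeletConfiguration section with CPUManager shows up there, even though /var/lib/kubelet/config.yaml lacks it.)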

@neolit123
Member

neolit123 commented Mar 19, 2020

sadly, kubeadm does not support a per-node KubeletConfiguration on joining nodes.
it wrongly assumes that all nodes are replicas, which stems from a design principle of the kubelet:
the kubelet treats the KubeletConfiguration as cluster-wide, even though it is technically an instance-specific file on a Node, and one that even has instance-specific fields.

we are trying to break away from this wrong design principle:
kubernetes/kubeadm#1682

in the meantime your workaround is the following (a sketch of the commands follows the list):

  • start the kind cluster without the CPU config
  • docker exec on the worker node, modify the /var/lib/kubelet/config.yaml to apply your settings
  • restart the kubelet with systemctl restart kubelet
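
a minimal sketch of those steps, assuming the default cluster name "kind" (so the worker container is kind-worker); note that the static policy may additionally require a non-zero CPU reservation, see the kubelet docs:

# worker node container name; adjust for your cluster name
NODE=kind-worker

# append the CPU manager settings to the kubelet config
# (the rendered config above has neither key, so a plain append stays valid YAML)
docker exec "$NODE" sh -c 'cat >> /var/lib/kubelet/config.yaml <<EOF
featureGates:
  CPUManager: true
cpuManagerPolicy: static
EOF'

# the kubelet refuses to change CPU manager policy while an old
# checkpoint exists, so remove it before restarting
docker exec "$NODE" rm -f /var/lib/kubelet/cpu_manager_state

# restart the kubelet to pick up the new config
docker exec "$NODE" systemctl restart kubelet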

@neolit123
Member

/remove-kind bug
/triage support

k8s-ci-robot added the kind/support label (Categorizes issue or PR as a support question.) and removed the kind/bug label (Categorizes issue or PR as related to a bug.) on Mar 19, 2020
@qinqon
Contributor Author

qinqon commented Mar 19, 2020

@neolit123 thanks for the quick answer, we will try to hack around it.

@neolit123
Member

/kind external

duplicate of:
#1270

/close

k8s-ci-robot added the kind/external label (upstream bugs) on Mar 19, 2020
@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

/kind external

duplicate of:
#1270

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
