
none: Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice" #5223

Closed
AlekseySkovorodnikov opened this issue Aug 28, 2019 · 3 comments
Labels: co/none-driver, help wanted, kind/support

Comments

@AlekseySkovorodnikov

minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver= systemd:

root@instance-280224:~# minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver= systemd
! There is a newer version of minikube available (v1.3.1). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v1.3.1

To disable this notification, run the following:
minikube config set WantUpdateNotification false

  • minikube v1.2.0 on linux (amd64)
  • Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
  • Configuring environment for Kubernetes v1.15.0 on Docker 18.06.0-ce
    • kubelet.cgroup-driver=
    • kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
  • Downloading kubeadm v1.15.0
  • Downloading kubelet v1.15.0
  • Pulling images ...
  • Launching Kubernetes ...

X Error starting cluster: cmd failed: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.13 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.13 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

root@instance-280224:~# minikube logs
==> dmesg <==
[Aug28 08:53] #2
[ +0.007689] #3
[ +0.093015] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.237618] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[ +0.486889] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
[ +0.075609] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[ +8.638365] sd 2:0:0:0: Power-on or device reset occurred
[ +0.004399] GPT:Primary header thinks Alt. header is not at the end of the disk.
[ +0.000002] GPT:4612095 != 209715199
[ +0.000000] GPT:Alternate GPT header not at the end of the disk.
[ +0.000001] GPT:4612095 != 209715199
[ +0.000000] GPT: Use GNU Parted to correct GPT errors.
[ +17.787705] new mount options do not match the existing superblock, will be ignored
[Aug28 10:32] kauditd_printk_skb: 5 callbacks suppressed

==> kernel <==
13:24:37 up 4:31, 1 user, load average: 0.21, 0.07, 0.01
Linux instance-280224 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

==> kubelet <==
-- Logs begin at Wed 2019-08-28 08:53:44 UTC, end at Wed 2019-08-28 13:24:37 UTC. --
Aug 28 13:24:34 instance-280224 kubelet[22643]: E0828 13:24:34.967548 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.067675 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.155255 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.167790 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.267913 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.355232 22643 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.368048 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.468179 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.473577 22643 controller.go:115] failed to ensure node lease exists, will retry in 7s, error: Get https://localhost:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.555255 22643 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.568309 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.668448 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.755175 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.768565 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.868686 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.955176 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:35 instance-280224 kubelet[22643]: E0828 13:24:35.968815 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.068965 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.155721 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.169200 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.269417 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.309830 22643 event.go:249] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/default/events/minikube.15bf180bb3249fea: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.355653 22643 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.369561 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.469691 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.555691 22643 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.569830 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.669948 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.755654 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.770062 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.870191 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.955730 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:36 instance-280224 kubelet[22643]: E0828 13:24:36.970405 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.070548 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.156336 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.170665 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.270782 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.356189 22643 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.370916 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.471151 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.556212 22643 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.571384 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.671508 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.756358 22643 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.771638 22643 kubelet.go:2248] node "minikube" not found
Aug 28 13:24:37 instance-280224 kubelet[22643]: I0828 13:24:37.799460 22643 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.805679 22643 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-controller-manager-minikube": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.805724 22643 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-controller-manager-minikube_kube-system(2c89c36506b770c7c17f0466f65d88ab)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-controller-manager-minikube": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.805884 22643 kuberuntime_manager.go:688] createPodSandbox for pod "kube-controller-manager-minikube_kube-system(2c89c36506b770c7c17f0466f65d88ab)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-controller-manager-minikube": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
Aug 28 13:24:37 instance-280224 kubelet[22643]: E0828 13:24:37.806008 22643 pod_workers.go:190] Error syncing pod 2c89c36506b770c7c17f0466f65d88ab ("kube-controller-manager-minikube_kube-system(2c89c36506b770c7c17f0466f65d88ab)"), skipping: failed to "CreatePodSandbox" for "kube-controller-manager-minikube_kube-system(2c89c36506b770c7c17f0466f65d88ab)" with CreatePodSandboxError: "CreatePodSandbox for pod "kube-controller-manager-minikube_kube-system(2c89c36506b770c7c17f0466f65d88ab)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-controller-manager-minikube": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice""
root@instance-280224:~#

OS: Ubuntu 18.04 LTS

@tstromberg changed the title from 'cannot start minikube' to 'none: Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"' on Aug 28, 2019
@tstromberg added the co/none-driver, help wanted, and kind/support labels on Aug 28, 2019
@tstromberg
Contributor

I'm not sure what this output refers to, but one quirk in it is suspicious, though it may be a red herring:

Configuring environment for Kubernetes v1.15.0 on Docker 18.06.0-ce
* kubelet.cgroup-driver=
* kubelet.resolv-conf=/run/systemd/resolve/resolv.conf

Notice how cgroup-driver= has no value? It makes me wonder whether a space is being passed in after the equals sign and fooling the flag parser.
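For reference, the same command with the stray space removed (a sketch based on the command quoted above, not re-tested here) would be:

    minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd

or, quoted so the shell cannot split the value:

    minikube start --vm-driver=none --extra-config='kubelet.cgroup-driver=systemd'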

@tstromberg
Contributor

I'm closing this issue as it hasn't seen activity in a while, and it's unclear whether the problem still exists. If it does persist in the most recent release of minikube, please feel free to re-open it.

I still suspect that there is a space or newline before "systemd" in your command-line options that is causing this.

Thank you for opening the issue!
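If anyone hits this again, one way to confirm which cgroup driver the kubelet actually picked up (a sketch; the file path comes from the kubeadm output above) is:

    cat /var/lib/kubelet/kubeadm-flags.env
    systemctl status kubelet
    journalctl -xeu kubelet | grep -i cgroup

An empty cgroup-driver value in the kubelet's flags or config would support the stray-space theory.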

@WoodProgrammer

@tstromberg did you check the cgroup driver parameter of the kubelet on your nodes?

I think if you update the Docker daemon.json to use a different cgroup driver, you should update the kubelet parameters to match as well.
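A minimal sketch of keeping the two in sync, assuming Docker is the container runtime (the file contents here are illustrative, not taken from this report): first check which driver Docker is using,

    docker info --format '{{.CgroupDriver}}'

and if /etc/docker/daemon.json switches it to systemd, for example

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

then restart Docker and pass the matching setting to the kubelet:

    sudo systemctl restart docker
    minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd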
