But this is the second attempt at running that command. The first time, I got the same text:
root@instance-274528:~# minikube start --vm-driver=none --memory 4096 --cpus=2
* minikube v1.3.1 on Ubuntu 18.04
* Running on localhost (CPUs=4, Memory=7976MB, Disk=99067MB) ...
* OS release is Ubuntu 18.04.3 LTS
* Preparing Kubernetes v1.15.2 on Docker 19.03.1 ...
- kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Downloading kubeadm v1.15.2
* Downloading kubelet v1.15.2
* Pulling images ...
* Launching Kubernetes ...
X Error starting cluster: cmd failed: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.24 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.207357265Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215422527Z" level=warning msg="Your kernel does not support swap memory limit"
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215449758Z" level=warning msg="Your kernel does not support cgroup rt period"
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215458338Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.215611788Z" level=info msg="Loading containers: start."
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.317260686Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 19 14:52:49 instance-274528 dockerd[20383]: time="2019-08-19T14:52:49.446644119Z" level=info msg="Loading containers: done."
Aug 20 09:27:30 instance-274528 kubelet[22223]: I0820 09:27:30.842597 22223 plugins.go:103] No cloud provider specified.
Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015031 22223 server.go:661] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.015360 22223 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.018934 22223 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.019131 22223 client.go:104] Start docker client with request timeout=2m0s
Aug 20 09:27:31 instance-274528 kubelet[22223]: W0820 09:27:31.040558 22223 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.040607 22223 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Aug 20 09:27:31 instance-274528 kubelet[22223]: W0820 09:27:31.040783 22223 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 09:27:31 instance-274528 kubelet[22223]: I0820 09:27:31.044724 22223 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Aug 20 09:27:31 instance-274528 kubelet[22223]: F0820 09:27:31.066052 22223 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Aug 20 09:27:31 instance-274528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Aug 20 09:27:31 instance-274528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
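Journal entries like these can be pulled with the commands kubeadm itself suggests; a minimal sketch (the --no-pager and tail filtering are just conveniences, not from the thread):

systemctl status kubelet                       # current state of the kubelet service
journalctl -xeu kubelet --no-pager | tail -50  # last lines of the kubelet journal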
The key error from the kubelet journal is:
kubelet[22223]: F0820 09:27:31.066052 22223 server.go:273] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
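The two drivers have to agree. A minimal sketch of the two usual fixes, assuming the none driver and this era of minikube/Docker (the flag name and the daemon.json key are the standard ones, but verify them against your versions; the daemon.json write below overwrites any existing file):

# Option 1: configure the kubelet (via minikube) to use the systemd driver Docker reports
minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd

# Option 2: switch Docker back to cgroupfs so it matches the kubelet, then retry
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
systemctl restart docker
minikube delete && minikube start --vm-driver=none --memory 4096 --cpus=2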
This issue appears to be a duplicate of #4172, so I will close this one so that we may centralize the content relating to the issue. If you feel that this issue is not in fact a duplicate, please feel free to re-open it. If you have additional information to share, please add it to the new issue.
tstromberg changed the title from "cannot launch minikube" to: none: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd" (Aug 20, 2019)
I wonder if Docker has started to configure "systemd" as the default? I seem to recall that it used to start out with "cgroupfs" until you explicitly configured /etc/docker/daemon.json to use the other one.
There seem to be more of these reports from Ubuntu lately; before, it was just Red Hat that used to do some custom configuration (like switching the cgroup driver and the "aufs" storage driver, for instance).
Of course, Docker has the opposite opinion from Kubernetes:
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
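A quick way to confirm which driver each side actually ended up with (a sketch; docker info's CgroupDriver field is standard, and the kubelet config path is the one shown in the logs above):

docker info -f '{{.CgroupDriver}}'               # driver Docker is using
grep cgroupDriver /var/lib/kubelet/config.yaml   # driver the kubelet was configured with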
minikube start --vm-driver=none --memory 4096 --cpus=2:
root@instance-274528:~# minikube start --vm-driver=none --memory 4096 --cpus=2
X Error restarting cluster: waiting for apiserver: timed out waiting for the condition
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
output: [init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[WARNING Hostname]: hostname "minikube" could not be reached
[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs/"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.1.24 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
: running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--data-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1
root@instance-274528:~# minikube logs
I am trying to install Qlik Sense software on a Kubernetes cluster, following this guide:
http://www.bardess.com/qlik-sense-on-kubernetes-a-beginners-guide/
OS: Ubuntu 18.04.2 LTS