Cannot deploy Kubernetes 1.8.0 with Kubeadm 1.8.0 on Raspberry Pi #479

Closed · joedborg opened this issue Oct 3, 2017 · 9 comments

joedborg commented Oct 3, 2017

Versions

kubeadm version (use kubeadm version): 1.8.0

Environment:

  • Kubernetes version (use kubectl version): 1.8.0
  • Cloud provider or hardware configuration: Raspberry Pi 3s
  • OS (e.g. from /etc/os-release): Debian Jessie
  • Kernel (e.g. uname -a): 4.9.35-v7+

What happened?

Kubeadm cannot bring up master.

What you expected to happen?

Kubeadm brings up a master instance.

How to reproduce it (as minimally and precisely as possible)?

sudo kubeadm init on a Raspberry Pi with Debian Jessie.

Anything else we need to know?

$ sudo kubeadm init --apiserver-advertise-address 192.168.0.47 
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.05.0-ce. Max validated version: 17.03
[preflight] WARNING: Running with swap on is not supported. Please disable swap or set kubelet's --fail-swap-on flag to false.
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [butch kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.47]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by that:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        - There is no internet connection; so the kubelet can't pull the following control plane images:
                - gcr.io/google_containers/kube-apiserver-arm:v1.8.0
                - gcr.io/google_containers/kube-controller-manager-arm:v1.8.0
                - gcr.io/google_containers/kube-scheduler-arm:v1.8.0

You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Tue 2017-10-03 14:24:21 UTC; 308ms ago
     Docs: http://kubernetes.io/docs/
  Process: 10814 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 10814 (code=exited, status=1/FAILURE)

kad (Member) commented Oct 3, 2017

Have you tried disabling swap, or adding the parameter to the kubelet?
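
In practice, disabling swap might look like this (a sketch only; dphys-swapfile is the Raspbian default and is an assumption here, not something confirmed in the thread):

# Turn off all active swap immediately.
sudo swapoff -a
# If the Pi uses dphys-swapfile (the Raspbian default), stop and disable it.
sudo systemctl stop dphys-swapfile
sudo systemctl disable dphys-swapfile
# Otherwise, comment out any swap entries in /etc/fstab so swap does not come back after a reboot.
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab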

joedborg (Author) commented Oct 3, 2017

Hey kad, thanks for the tip - oddly preflight didn't seem to catch it, but it worked!

joedborg closed this as completed Oct 3, 2017

kad (Member) commented Oct 3, 2017

Preflight actually produced a warning about that.

joedborg (Author) commented Oct 3, 2017

@kad, yes you're right, sorry. Any reason why it didn't fail like it used to? Is that a new feature?

kad (Member) commented Oct 3, 2017

The idea behind making this a warning in the preflight check is that some people want to run the kubelet while swap is still enabled, but that requires adding an extra flag in the kubelet systemd drop-in.
You can see an example in kubernetes/kubernetes#53333.
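
A sketch of such a drop-in (the file name 20-fail-swap-on.conf and the use of KUBELET_EXTRA_ARGS are assumptions based on the stock kubeadm packaging; the linked PR is the authoritative example):

# Add --fail-swap-on=false via a kubelet drop-in so the kubelet tolerates enabled swap.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-fail-swap-on.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
EOF
# Reload units and restart the kubelet so the new flag takes effect.
sudo systemctl daemon-reload
sudo systemctl restart kubelet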

xinjt commented Apr 2, 2018

This is my situation. When I run

kubeadm init --apiserver-advertise-address=0.0.0.0 --kubernetes-version=1.10.0 --pod-network-cidr 10.244.0.0/16

I get this error:
[root@k8s-master kube]# kubeadm init --apiserver-advertise-address=0.0.0.0 --kubernetes-version=1.10.0 --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:

  • The kubelet is not running
  • The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
  • Either there is no internet connection, or imagePullPolicy is set to "Never",
    so the kubelet cannot pull or find the following control plane images:
  • k8s.gcr.io/kube-apiserver-amd64:v1.10.0
  • k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
  • k8s.gcr.io/kube-scheduler-amd64:v1.10.0
  • k8s.gcr.io/etcd-amd64:3.1.12 (only if no external etcd endpoints are configured)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

  • 'systemctl status kubelet'
  • 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster

I thought I did not have the images, so I pulled them all with docker:

[root@k8s-master images]# docker images
REPOSITORY                                  TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy-amd64                 v1.10.0   bfc21aadc7d3   6 days ago      97 MB
k8s.gcr.io/kube-apiserver-amd64             v1.10.0   af20925d51a3   6 days ago      225 MB
k8s.gcr.io/kube-scheduler-amd64             v1.10.0   704ba848e69a   6 days ago      50.4 MB
k8s.gcr.io/kube-controller-manager-amd64    v1.10.0   ad86dbed1555   6 days ago      148 MB
k8s.gcr.io/etcd-amd64                       3.1.12    52920ad46f5b   3 weeks ago     193 MB
k8s.gcr.io/k8s-dns-sidecar-amd64            1.14.7    db76ee297b85   5 months ago    42 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64           1.14.7    5d049a8c4eec   5 months ago    50.3 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64      1.14.7    5feec37454f4   5 months ago    41 MB
k8s.gcr.io/pause-amd64                      3.0       99e59f495ffa   23 months ago   747 kB

and then I still got the error, so I ran systemctl status kubelet:

[root@k8s-master images]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2018-04-02 03:46:10 CST; 13min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 12180 (kubelet)
   Memory: 33.2M
   CGroup: /system.slice/kubelet.service
           └─12180 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki

Apr 02 03:59:43 k8s-master kubelet[12180]: E0402 03:59:43.623741 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.6:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:44 k8s-master kubelet[12180]: E0402 03:59:44.622818 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:44 k8s-master kubelet[12180]: E0402 03:59:44.623521 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:44 k8s-master kubelet[12180]: E0402 03:59:44.624732 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.6:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:45 k8s-master kubelet[12180]: E0402 03:59:45.624634 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:45 k8s-master kubelet[12180]: E0402 03:59:45.625524 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:45 k8s-master kubelet[12180]: E0402 03:59:45.626470 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.6:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:46 k8s-master kubelet[12180]: E0402 03:59:46.625500 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:46 k8s-master kubelet[12180]: E0402 03:59:46.626475 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:46 k8s-master kubelet[12180]: E0402 03:59:46.627449 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.6:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused

Then I ran journalctl -xeu kubelet:
[root@k8s-master ~]# journalctl -xeu kubelet
Apr 02 04:00:04 k8s-master kubelet[12180]: E0402 04:00:04.643604 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:04 k8s-master kubelet[12180]: E0402 04:00:04.644545 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:04 k8s-master kubelet[12180]: E0402 04:00:04.645568 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:05 k8s-master kubelet[12180]: E0402 04:00:05.644756 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:05 k8s-master kubelet[12180]: E0402 04:00:05.645747 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:05 k8s-master kubelet[12180]: E0402 04:00:05.646843 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:06 k8s-master kubelet[12180]: E0402 04:00:06.645778 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:06 k8s-master kubelet[12180]: E0402 04:00:06.646779 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:06 k8s-master kubelet[12180]: E0402 04:00:06.647713 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:07 k8s-master kubelet[12180]: E0402 04:00:07.646802 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:07 k8s-master kubelet[12180]: E0402 04:00:07.647650 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:07 k8s-master kubelet[12180]: E0402 04:00:07.648548 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.290824 12180 event.go:209] Unable to write event: 'Patch https://192.168.1.6:6443/api/v1/namespaces/default/events/k8s-master.152
Apr 02 04:00:08 k8s-master kubelet[12180]: W0402 04:00:08.374383 12180 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.374645 12180 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.647516 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.648611 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.649669 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:09 k8s-master kubelet[12180]: E0402 04:00:09.648747 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:09 k8s-master kubelet[12180]: E0402 04:00:09.649597 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:09 k8s-master kubelet[12180]: E0402 04:00:09.650421 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:10 k8s-master kubelet[12180]: I0402 04:00:10.192038 12180 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Apr 02 04:00:10 k8s-master kubelet[12180]: I0402 04:00:10.196478 12180 kubelet_node_status.go:82] Attempting to register node k8s-master
Apr 02 04:00:10 k8s-master kubelet[12180]: E0402 04:00:10.197073 12180 kubelet_node_status.go:106] Unable to register node "k8s-master" with API server: Post https://192.168.1.6:6443/api/
Apr 02 04:00:10 k8s-master kubelet[12180]: E0402 04:00:10.649449 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:10 k8s-master kubelet[12180]: E0402 04:00:10.650511 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:10 k8s-master kubelet[12180]: E0402 04:00:10.651554 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:11 k8s-master kubelet[12180]: E0402 04:00:11.650552 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:11 k8s-master kubelet[12180]: E0402 04:00:11.651376 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:11 k8s-master kubelet[12180]: E0402 04:00:11.652206 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:12 k8s-master kubelet[12180]: E0402 04:00:12.651535 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:12 k8s-master kubelet[12180]: E0402 04:00:12.652445 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:12 k8s-master kubelet[12180]: E0402 04:00:12.653378 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.062758 12180 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "k8s-maste
Apr 02 04:00:13 k8s-master kubelet[12180]: W0402 04:00:13.376514 12180 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.376782 12180 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.652653 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.653439 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.654428 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.

What is the problem?

fredmj commented Apr 2, 2018

@xinjt
I would say:

0. Did you check all the steps of the official doc?

1. Did you check the status of the required ports according to the doc?
   firewall-cmd --permanent --zone=public --add-port=6443/tcp
   ...
   firewall-cmd --reload

2. Did you check the consistency of the cgroup driver used by docker and the kubelet?
   grep -i Cgroup /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
   docker info | grep Cgroup

3. Did you check the SELinux status? [¹]
   setenforce 0

4. Did you check the kernel parameter bridge-nf-call-iptables? (See the sketch after this list for making it persistent.)
   sysctl -n net.bridge.bridge-nf-call-iptables
   1

5. If you previously made changes to the kernel and/or services, did you correctly reload the rules?
   systemctl daemon-reload
   sysctl --system

6. As root, did you export the KUBECONFIG variable?
   export KUBECONFIG=/etc/kubernetes/admin.conf

[¹]: Note that SELinux should be disabled only for test purposes; it should obviously be properly configured for a production service.
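
A minimal sketch of making the bridge-nf-call-iptables setting persistent (loading br_netfilter and the /etc/sysctl.d/k8s.conf file name are assumptions, not part of the checklist above):

# Load the br_netfilter module so the bridge sysctls exist.
sudo modprobe br_netfilter
# Persist the bridge sysctls so iptables sees bridged pod traffic after a reboot.
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply all sysctl configuration files now.
sudo sysctl --system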

kad (Member) commented Apr 2, 2018

@xinjt please open a separate issue for your case; it is different from this one.
As a hint, try replacing --apiserver-advertise-address=0.0.0.0 with an explicit IP address.
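
For example (a sketch only; 192.168.1.6 is taken from the kubelet logs above, and resetting the earlier attempt first is an assumption):

# Optionally clear the previous init attempt, then advertise a concrete host IP instead of 0.0.0.0.
kubeadm reset
kubeadm init --apiserver-advertise-address=192.168.1.6 --kubernetes-version=1.10.0 --pod-network-cidr 10.244.0.0/16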

yipulash commented May 4, 2020

@joedborg Has this problem been solved? I encountered the same problem.
