
node(s) had untolerated taint node.kubernetes.io/not-ready #14924

Closed
sfl0r3nz05 opened this issue Sep 8, 2022 · 12 comments
Labels
co/none-driver, co/runtime/docker (Issues specific to a docker runtime), kind/support (Categorizes issue or PR as a support question)

Comments

@sfl0r3nz05

What Happened?

I have deployed minikube cluster using none driver:

minikube start --driver=none

Pod status is Pending all the time:

NAME                                  READY   STATUS    RESTARTS   AGE
pod/hello-minikube-55cfcd4f75-92558   0/1     Pending   0          7s

The describe command for the pod, kubectl describe pod hello-minikube-55cfcd4f75-92558, gives me the message:

  • 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

The describe command for the node, kubectl describe node ip-172-31-29-172, shows:

ubuntu@ip-172-31-29-172:~$ kubectl describe node ip-172-31-29-172
Name:               ip-172-31-29-172
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-172-31-29-172
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_09_08T10_54_55_0700
                    minikube.k8s.io/version=v1.26.1
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 08 Sep 2022 10:54:52 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ip-172-31-29-172
  AcquireTime:     <unset>
  RenewTime:       Thu, 08 Sep 2022 11:09:55 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 08 Sep 2022 11:05:20 +0000   Thu, 08 Sep 2022 10:54:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 08 Sep 2022 11:05:20 +0000   Thu, 08 Sep 2022 10:54:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 08 Sep 2022 11:05:20 +0000   Thu, 08 Sep 2022 10:54:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 08 Sep 2022 11:05:20 +0000   Thu, 08 Sep 2022 10:54:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  172.31.29.172
  Hostname:    ip-172-31-29-172
Capacity:
  cpu:                2
  ephemeral-storage:  101445540Ki
  hugepages-2Mi:      0
  memory:             4016848Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  101445540Ki
  hugepages-2Mi:      0
  memory:             4016848Ki
  pods:               110
System Info:
  Machine ID:                 5aa8e18df492498e8e5b3d7d8d6a660e
  System UUID:                ec24b325-4c5f-a070-928b-a71ffb4dfac4
  Boot ID:                    4b3b31d2-00cb-4fb3-a23c-3c5046fd53b5
  Kernel Version:             5.13.0-1029-aws
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.17
  Kubelet Version:            v1.24.3
  Kube-Proxy Version:         v1.24.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-ip-172-31-29-172                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
  kube-system                 kube-apiserver-ip-172-31-29-172             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-controller-manager-ip-172-31-29-172    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
  kube-system                 kube-proxy-djvhh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
  kube-system                 kube-scheduler-ip-172-31-29-172             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 14m                kube-proxy
  Normal  Starting                 15m                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  15m (x3 over 15m)  kubelet          Node ip-172-31-29-172 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    15m (x3 over 15m)  kubelet          Node ip-172-31-29-172 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     15m (x3 over 15m)  kubelet          Node ip-172-31-29-172 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 15m                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  15m                kubelet          Node ip-172-31-29-172 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    15m                kubelet          Node ip-172-31-29-172 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     15m                kubelet          Node ip-172-31-29-172 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
  Normal  RegisteredNode           14m                node-controller  Node ip-172-31-29-172 event: Registered Node ip-172-31-29-172 in Controller

Attach the log file

log.txt

Operating System

No response

Driver

No response

@sfl0r3nz05 (Author)

[Update]

I have untainted the node: kubectl taint node ip-172-31-29-172 node.kubernetes.io/not-ready:NoSchedule-

Now, the status of the pod has changed to ContainerCreating. However, I receive this new error message in the log file:

Sep 08 13:53:19 ip-172-31-29-172 kubelet[31028]: E0908 13:53:19.231191   31028 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-6d4b75cb6d-bgfkw" podUID=33e8edfd-0492-4122-83d1-61b6f5e96f2a

@sfl0r3nz05 (Author) commented Sep 8, 2022

[Update]

I have downgraded minikube to version 1.24.0 and now everything works fine with the none driver: minikube start --driver=none.

When the service is exposed using NodePort (kubectl expose pod hello-minikube --type=NodePort) the pod runs properly:

ubuntu@ip-172-31-29-172:~$ kubectl get all
NAME                 READY   STATUS    RESTARTS   AGE
pod/hello-minikube   1/1     Running   0          14s

NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/hello-minikube   NodePort    10.110.164.198   <none>        8080:31173/TCP   3s
service/kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          49s

The same happens when the service is exposed using LoadBalancer (kubectl expose deployment hello-minikube --type=LoadBalancer --port=8080); the deployment runs properly:

ubuntu@ip-172-31-29-172:~$ kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/hello-minikube-6d4df66d87-2mmsl   1/1     Running   0          9s

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/hello-minikube   LoadBalancer   10.104.185.160   <pending>     8080:31671/TCP   5s
service/kubernetes       ClusterIP      10.96.0.1        <none>        443/TCP          8m36s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-minikube   1/1     1            1           9s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-minikube-6d4df66d87   1         1         1       9s

The problem occurs from version v1.26.0 onwards when launching the command minikube start --driver=none:

ubuntu@ip-172-31-29-172:~$ minikube start --driver=none
😄  minikube v1.26.0 on Ubuntu 20.04 (xen/amd64)
✨  Using the none driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🤹  Running on localhost (CPUs=2, Memory=3922MB, Disk=99067MB) ...
ℹ️  OS release is Ubuntu 20.04.4 LTS

❌  Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found

Why does this happen?

@afbjorklund (Collaborator) commented Sep 9, 2022

With Kubernetes 1.24, CRI is now mandatory. Since Kubernetes 1.25, CNI is also recommended.

You need to install crictl, from https://github.com/kubernetes-sigs/cri-tools (and later cni-plugins).
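A minimal sketch of that install, assuming a release tarball from the cri-tools releases page (VERSION and ARCH here are assumptions; pick the release matching your kubelet, v1.24.x in this thread):

```shell
# Hedged sketch: install crictl from a cri-tools release tarball.
# VERSION and ARCH are assumptions; check the releases page.
VERSION="v1.24.2"
ARCH="amd64"
TARBALL="crictl-${VERSION}-linux-${ARCH}.tar.gz"
URL="https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/${TARBALL}"
echo "${URL}"
# Then, run manually:
#   curl -LO "${URL}"
#   sudo tar -C /usr/local/bin -xzf "${TARBALL}"
#   crictl --version
```

The cni-plugins releases follow the same pattern and are typically unpacked into /opt/cni/bin.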

@afbjorklund added the co/none-driver and co/runtime/docker (Issues specific to a docker runtime) labels on Sep 9, 2022
@sfl0r3nz05 (Author)

Thanks. For versions v1.25 and v1.26 I have installed cri-dockerd and crictl, in addition to the conntrack package.

Despite that, the error persists:

ubuntu@ip-172-31-29-172:~/cri-dockerd$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
ubuntu@ip-172-31-29-172:~/cri-dockerd$ kubectl get nodes
NAME               STATUS     ROLES           AGE   VERSION
ip-172-31-29-172   NotReady   control-plane   36s   v1.24.3
ubuntu@ip-172-31-29-172:~/cri-dockerd$ kubectl describe node ip-172-31-29-172
Name:               ip-172-31-29-172
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-172-31-29-172
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_09_09T08_17_19_0700
                    minikube.k8s.io/version=v1.26.1
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 09 Sep 2022 08:17:16 +0000
Taints:             node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  ip-172-31-29-172
  AcquireTime:     <unset>
  RenewTime:       Fri, 09 Sep 2022 08:20:02 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 09 Sep 2022 08:17:29 +0000   Fri, 09 Sep 2022 08:17:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 09 Sep 2022 08:17:29 +0000   Fri, 09 Sep 2022 08:17:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 09 Sep 2022 08:17:29 +0000   Fri, 09 Sep 2022 08:17:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 09 Sep 2022 08:17:29 +0000   Fri, 09 Sep 2022 08:17:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  172.31.29.172
  Hostname:    ip-172-31-29-172
Capacity:
  cpu:                2
  ephemeral-storage:  101445540Ki
  hugepages-2Mi:      0
  memory:             4016848Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  101445540Ki
  hugepages-2Mi:      0
  memory:             4016848Ki
  pods:               110
System Info:
  Machine ID:                 5aa8e18df492498e8e5b3d7d8d6a660e
  System UUID:                ec24b325-4c5f-a070-928b-a71ffb4dfac4
  Boot ID:                    262d1872-95ee-41dc-ac3b-0757cb60190d
  Kernel Version:             5.13.0-1029-aws
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.17
  Kubelet Version:            v1.24.3
  Kube-Proxy Version:         v1.24.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-ip-172-31-29-172                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m49s
  kube-system                 kube-apiserver-ip-172-31-29-172             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m49s
  kube-system                 kube-controller-manager-ip-172-31-29-172    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m49s
  kube-system                 kube-proxy-2chb2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
  kube-system                 kube-scheduler-ip-172-31-29-172             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             100Mi (2%)  0 (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age    From             Message
  ----    ------                   ----   ----             -------
  Normal  Starting                 2m34s  kube-proxy
  Normal  Starting                 2m49s  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  2m49s  kubelet          Node ip-172-31-29-172 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    2m49s  kubelet          Node ip-172-31-29-172 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     2m49s  kubelet          Node ip-172-31-29-172 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  2m49s  kubelet          Updated Node Allocatable limit across pods
  Normal  RegisteredNode           2m37s  node-controller  Node ip-172-31-29-172 event: Registered Node ip-172-31-29-172 in Controller

So, I have received this error: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized.

I suppose it is because I have not enabled the docker driver (--driver=docker), but in this case I must use the none driver (--driver=none) to be able to use the LoadBalancer via minikube tunnel. Any troubleshooting suggestions?

@afbjorklund (Collaborator)

You also need to add a cni configuration, until it is enabled by default.

To use docker network: --network-plugin='' --cni=''

To use bridge network: --network-plugin=cni --cni=bridge

Using CNI is recommended, starting from Kubernetes 1.25 (and 1.24).

@sfl0r3nz05 (Author) commented Sep 9, 2022

[Update]

I had a problem with cri-dockerd execution:

ubuntu@ip-172-31-28-172:~/cri-dockerd$ sudo systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/cri-docker.service.d
             └─10-cni.conf
     Active: failed (Result: exit-code) since Fri 2022-09-09 09:37:55 UTC; 1min 43s ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
    Process: 5165 ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin >
   Main PID: 5165 (code=exited, status=203/EXEC)

Sep 09 09:37:55 ip-172-31-28-172 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
Sep 09 09:37:55 ip-172-31-28-172 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
Sep 09 09:37:55 ip-172-31-28-172 systemd[1]: cri-docker.service: Start request repeated too quickly.
Sep 09 09:37:55 ip-172-31-28-172 systemd[1]: cri-docker.service: Failed with result 'exit-code'.
Sep 09 09:37:55 ip-172-31-28-172 systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
Sep 09 09:38:20 ip-172-31-28-172 systemd[1]: cri-docker.service: Start request repeated too quickly.
Sep 09 09:38:20 ip-172-31-28-172 systemd[1]: cri-docker.service: Failed with result 'exit-code'.
Sep 09 09:38:20 ip-172-31-28-172 systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.

It was solved when cri-dockerd was moved to /usr/bin/ folder using the command:

sudo install -o root -g root -m 0755 bin/cri-dockerd /usr/bin/cri-dockerd

So, I started the minikube service:

minikube start --driver=none --network-plugin=cni --cni=bridge

Although the cri-dockerd service is working properly:

ubuntu@ip-172-31-28-172:~/cri-dockerd$ sudo systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
     Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/cri-docker.service.d
             └─10-cni.conf
     Active: active (running) since Fri 2022-09-09 09:40:11 UTC; 2min 2s ago
TriggeredBy: ● cri-docker.socket
       Docs: https://docs.mirantis.com
   Main PID: 5378 (cri-dockerd)
      Tasks: 9
     Memory: 15.0M
     CGroup: /system.slice/cri-docker.service
             └─5378 /usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-ca>

Sep 09 09:41:26 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:26Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:41:31 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:31Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:41:36 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:36Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:41:41 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:41Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:41:46 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:46Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:41:51 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:51Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:41:56 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:41:56Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:42:01 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:42:01Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:42:06 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:42:06Z" level=error msg="Error validating CNI config list (\>
Sep 09 09:42:11 ip-172-31-28-172 cri-dockerd[5378]: time="2022-09-09T09:42:11Z" level=error msg="Error validating CNI config list (\>
...skipping...

I guess this is related to the error message (level=error msg="Error validating CNI config list"); still, the node does not initialize properly:

  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

@afbjorklund (Collaborator)

> I had a problem with cri-dockerd execution:
> It was solved when cri-dockerd was moved to /usr/bin/ folder using the command:

They addressed that with sed, in the documentation:

https://github.com/Mirantis/cri-dockerd#build-and-install

@afbjorklund (Collaborator)

You can check the CNI config, in /etc/cni/net.d.

There are other CNI plugins too, such as "flannel".
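For reference, a minimal bridge CNI configuration of the kind found in /etc/cni/net.d (this is an illustrative sketch, not the exact file minikube writes; the subnet must match the cluster's pod CIDR, 10.244.0.0/24 on this node):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    }
  ]
}
```

The "type" values here (bridge, host-local) must correspond to plugin binaries present in /opt/cni/bin, which is why installing cni-plugins is part of the fix.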

@afbjorklund added the kind/support (Categorizes issue or PR as a support question) label on Sep 9, 2022
@xin3liang

> ...
>
>   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
>   ----             ------  -----------------                 ------------------                ------                       -------
>   MemoryPressure   False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
>   DiskPressure     False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
>   PIDPressure      False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
>   Ready            False   Fri, 09 Sep 2022 09:40:46 +0000   Fri, 09 Sep 2022 09:40:28 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

I met the same issue, "network plugin is not ready", which leaves the node in NotReady status. I fixed it by following @afbjorklund's CNI plugins install guide: #14724 (comment)

@sfl0r3nz05 (Author)

Thanks, @xin3liang and @afbjorklund. Now it works for me. I will close the issue.

@devopslearn21

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

This error occurred with microk8s. Any suggestions on how to solve it?

@stevenchou18q7

> 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
>
> This error occurred with microk8s. Any suggestions on how to solve it?

The node may be tainted. Just remove the taint:
kubectl taint node yourNodeName node.kubernetes.io/not-ready:NoSchedule-
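A hedged helper for the general case: the taint to remove can be read off the Taints line of `kubectl describe node` (the line below uses the disk-pressure taint quoted in this comment; yourNodeName is a placeholder). Note that for condition taints such as disk-pressure the kubelet re-adds the taint until the underlying condition clears, so freeing disk space is the real fix:

```shell
# Hedged sketch: build the removal command from a describe-node Taints line.
taints_line='Taints:             node.kubernetes.io/disk-pressure:NoSchedule'
taint=$(printf '%s\n' "$taints_line" | awk '{print $2}')
cmd="kubectl taint node yourNodeName ${taint}-"
echo "$cmd"
```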
