
none: waiting for k8s-app=kube-proxy: timed out waiting for the condition #5161

Closed
AlekseySkovorodnikov opened this issue Aug 21, 2019 · 4 comments
Labels: area/testing, co/kube-proxy, co/none-driver, kind/flake, priority/important-soon, triage/needs-information

Comments


AlekseySkovorodnikov commented Aug 21, 2019

minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd:

root@instance-275495:~# minikube start --vm-driver=none --extra-config=kubelet.cgroup-driver=systemd
* minikube v1.2.0 on linux (amd64)
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Restarting existing none VM for "minikube" ...
* Waiting for SSH access ...
* Configuring environment for Kubernetes v1.15.0 on Docker 19.03.1
  - kubelet.cgroup-driver=systemd
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Relaunching Kubernetes v1.15.0 using kubeadm ...
* Configuring local host environment ...

! The 'none' driver provides limited isolation and may reduce system security and reliability.
! For more information, see:

! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may
! need to relocate them. For example, to overwrite your own settings:

  • sudo mv /root/.kube /root/.minikube $HOME
  • sudo chown -R $USER $HOME/.kube $HOME/.minikube
  • This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
  • Verifying: apiserver proxy

X Wait failed: waiting for k8s-app=kube-proxy: timed out waiting for the condition

  • Sorry that minikube crashed. If this was unexpected, we would love to hear from you:

The "Verifying: apiserver proxy" step takes a very long time, and minikube crashes after it.
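
For reference, a quick way to see which pods the wait is actually stuck on (a sketch, assuming kubectl is already pointed at this cluster) is:

# List the kube-proxy pods that the wait condition selects on
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide

# Show recent events and container state for those pods
kubectl -n kube-system describe pods -l k8s-app=kube-proxy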

minikube logs:

root@instance-275495:~# minikube logs
==> dmesg <==
[Aug21 09:56]  #2
[  +0.007989]  #3
[  +0.091918] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[  +0.236706] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11
[  +0.479821] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
[  +0.074154] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10
[  +9.161490] sd 2:0:0:0: Power-on or device reset occurred
[  +0.004057] GPT:Primary header thinks Alt. header is not at the end of the disk.
[  +0.000002] GPT:4612095 != 209715199
[  +0.000000] GPT:Alternate GPT header not at the end of the disk.
[  +0.000001] GPT:4612095 != 209715199
[  +0.000000] GPT: Use GNU Parted to correct GPT errors.
[ +11.549923] new mount options do not match the existing superblock, will be ignored
[Aug21 10:36] systemd: 36 output lines suppressed due to ratelimiting
[Aug21 10:37] kauditd_printk_skb: 5 callbacks suppressed
[Aug21 13:10] tee (14070): /proc/13291/oom_adj is deprecated, please use /proc/13291/oom_score_adj instead.
[Aug21 13:35] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000006] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.006499] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000006] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[Aug21 13:53] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000005] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.052891] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000005] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.005566] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000004] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.013919] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000004] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[Aug21 14:14] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000006] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.006591] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000005] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[Aug21 14:17] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000005] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[Aug21 14:18] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000007] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.005806] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000004] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.006784] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000004] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[Aug21 14:28] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000006] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.018875] overlayfs: upperdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.
[  +0.000004] overlayfs: workdir is in-use by another mount, accessing files from both mounts will result in undefined behavior.

==> kernel <==
15:42:27 up 5:46, 1 user, load average: 0.33, 0.36, 0.26
Linux instance-275495 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

==> kube-addon-manager <==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==
Unable to connect to the server: dial tcp: lookup localhost on 8.8.8.8:53: no such host
WRN: == Error getting default service account, retry in 0.5 second ==

==> kube-apiserver <==
E0821 15:22:25.921419 1 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0821 15:22:25.921534 1 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0821 15:22:25.921622 1 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0821 15:22:25.921732 1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook.
I0821 15:22:25.921772 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0821 15:22:25.923945 1 client.go:354] parsed scheme: ""
I0821 15:22:25.924059 1 client.go:354] scheme "" not registered, fallback to default scheme
I0821 15:22:25.924165 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
I0821 15:22:25.924352 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0821 15:22:25.935087 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0821 15:22:25.935818 1 client.go:354] parsed scheme: ""
I0821 15:22:25.935939 1 client.go:354] scheme "" not registered, fallback to default scheme
I0821 15:22:25.936052 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
I0821 15:22:25.936266 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0821 15:22:25.947274 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0821 15:22:28.039140 1 secure_serving.go:116] Serving securely on [::]:8443
I0821 15:22:28.039872 1 available_controller.go:374] Starting AvailableConditionController
I0821 15:22:28.039933 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0821 15:22:28.039965 1 controller.go:81] Starting OpenAPI AggregationController
I0821 15:22:28.040022 1 autoregister_controller.go:140] Starting autoregister controller
I0821 15:22:28.040040 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0821 15:22:28.040122 1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0821 15:22:28.040154 1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
E0821 15:22:28.045162 1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.0.1.6, ResourceVersion: 0, AdditionalErrorMsg:
I0821 15:22:28.064827 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0821 15:22:28.064855 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0821 15:22:28.064895 1 crd_finalizer.go:255] Starting CRDFinalizer
I0821 15:22:28.064929 1 controller.go:83] Starting OpenAPI controller
I0821 15:22:28.064944 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0821 15:22:28.064965 1 naming_controller.go:288] Starting NamingConditionController
I0821 15:22:28.064983 1 establishing_controller.go:73] Starting EstablishingController
I0821 15:22:28.065020 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0821 15:22:28.153378 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0821 15:22:28.153696 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0821 15:22:28.153703 1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0821 15:22:28.153716 1 cache.go:39] Caches are synced for autoregister controller
I0821 15:22:28.164919 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0821 15:22:29.036990 1 controller.go:107] OpenAPI AggregationController: Processing item
I0821 15:22:29.037012 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0821 15:22:29.037025 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0821 15:22:29.048791 1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0821 15:22:29.051938 1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0821 15:22:29.051959 1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0821 15:22:30.820314 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0821 15:22:31.100389 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0821 15:22:31.424878 1 lease.go:223] Resetting endpoints for master service "kubernetes" to [10.0.1.6]
I0821 15:22:31.425449 1 controller.go:606] quota admission added evaluator for: endpoints
I0821 15:22:31.536389 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0821 15:22:31.553209 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0821 15:22:31.582954 1 controller.go:606] quota admission added evaluator for: daemonsets.apps

==> kube-scheduler <==
E0821 15:42:22.608849 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609044 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609050 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609121 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609172 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609217 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609178 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609333 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609447 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:22.609530 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642642 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642644 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642683 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642753 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642763 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642797 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642851 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642905 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.642971 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:23.643074 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678245 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678278 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678343 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678394 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678406 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678412 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678438 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678480 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678578 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:24.678711 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711623 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711645 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711661 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711678 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711709 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711719 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711751 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711759 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711883 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:25.711930 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.748968 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749022 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749040 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749093 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749102 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749178 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749189 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749316 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=status.phase%3DFailed%!C(MISSING)status.phase%3DSucceeded&limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749418 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host
E0821 15:42:26.749532 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp: lookup localhost on 8.8.8.8:53: no such host

==> kubelet <==
-- Logs begin at Wed 2019-08-21 09:56:37 UTC, end at Wed 2019-08-21 15:42:27 UTC. --
Aug 21 15:41:42 instance-275495 kubelet[9466]: W0821 15:41:42.753809 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_8852_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_8852_systemd_test_default.slice: no such file or directory
Aug 21 15:41:42 instance-275495 kubelet[9466]: W0821 15:41:42.753866 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_8852_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_8852_systemd_test_default.slice: no such file or directory
Aug 21 15:41:42 instance-275495 kubelet[9466]: W0821 15:41:42.782121 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/libcontainer_8858_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/libcontainer_8858_systemd_test_default.slice: no such file or directory
Aug 21 15:41:42 instance-275495 kubelet[9466]: W0821 15:41:42.782165 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_8858_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_8858_systemd_test_default.slice: no such file or directory
Aug 21 15:41:42 instance-275495 kubelet[9466]: W0821 15:41:42.782184 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_8858_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_8858_systemd_test_default.slice: no such file or directory
Aug 21 15:41:42 instance-275495 kubelet[9466]: W0821 15:41:42.782275 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_8858_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_8858_systemd_test_default.slice: no such file or directory
Aug 21 15:41:52 instance-275495 kubelet[9466]: W0821 15:41:52.934821 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9120_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9120_systemd_test_default.slice: no such file or directory
Aug 21 15:41:52 instance-275495 kubelet[9466]: W0821 15:41:52.935876 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9120_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9120_systemd_test_default.slice: no such file or directory
Aug 21 15:41:52 instance-275495 kubelet[9466]: W0821 15:41:52.960271 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9126_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9126_systemd_test_default.slice: no such file or directory
Aug 21 15:41:52 instance-275495 kubelet[9466]: W0821 15:41:52.960307 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9126_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9126_systemd_test_default.slice: no such file or directory
Aug 21 15:41:52 instance-275495 kubelet[9466]: W0821 15:41:52.960324 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9126_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9126_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.763420 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/libcontainer_9361_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/libcontainer_9361_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.763856 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9361_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9361_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.764035 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9361_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9361_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.764293 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9361_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9361_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.791804 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9367_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9367_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.950588 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9401_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9401_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.950846 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9401_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9401_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.950900 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9401_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9401_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.953810 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9401_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9401_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.953852 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9401_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9401_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.953884 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9401_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9401_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.983374 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9407_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9407_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.983419 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9407_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9407_systemd_test_default.slice: no such file or directory
Aug 21 15:42:02 instance-275495 kubelet[9466]: W0821 15:42:02.983451 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9407_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9407_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.008026 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9413_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9413_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.008276 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9413_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9413_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.008316 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9413_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9413_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.037471 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9419_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9419_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.037751 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9419_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9419_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.037788 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9419_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9419_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.040786 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9419_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9419_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.040827 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9419_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9419_systemd_test_default.slice: no such file or directory
Aug 21 15:42:03 instance-275495 kubelet[9466]: W0821 15:42:03.040858 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9419_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9419_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.751989 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9561_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9561_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.752071 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9561_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9561_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.752140 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9561_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9561_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.984258 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9620_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/libcontainer_9620_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.984431 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9620_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9620_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.984645 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9620_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9620_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.987806 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9620_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.987870 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9620_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9620_systemd_test_default.slice: no such file or directory
Aug 21 15:42:12 instance-275495 kubelet[9466]: W0821 15:42:12.987897 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9620_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9620_systemd_test_default.slice: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.012698 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/libcontainer_9626_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.012742 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9626_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9626_systemd_test_default.slice: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.012868 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9626_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9626_systemd_test_default.slice: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.016511 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9626_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9626_systemd_test_default.slice: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.016551 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9626_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9626_systemd_test_default.slice: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.042539 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/libcontainer_9633_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/libcontainer_9633_systemd_test_default.slice: no such file or directory
Aug 21 15:42:13 instance-275495 kubelet[9466]: W0821 15:42:13.042750 9466 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/libcontainer_9633_systemd_test_default.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/libcontainer_9633_systemd_test_default.slice: no such file or directory

Contents of config files:
daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
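
Since kubelet is being started with --extra-config=kubelet.cgroup-driver=systemd, it is worth double-checking that Docker actually picked up the systemd cgroup driver from this daemon.json. A minimal check (a sketch, run on the host where Docker and the kubelet are running) would be:

# Docker's active cgroup driver; this should print "systemd"
docker info --format '{{.CgroupDriver}}'

# The cgroup driver the running kubelet was actually started with
ps -o args= -C kubelet | tr ' ' '\n' | grep -- '--cgroup-driver'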

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --authorization-mode=Webhook
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
--cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt
--cluster-dns=10.96.0.10 --cluster-domain=cluster.local
--container-runtime=docker --fail-swap-on=false
--hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf
--pod-manifest-path=/etc/kubernetes/manifests --resolv-conf=/run/systemd/resolve/resolv.conf
[Install]
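
The kube-addon-manager and kube-scheduler logs above show lookups of "localhost" being forwarded to 8.8.8.8 and failing ("no such host"), and the kubelet is pointed at /run/systemd/resolve/resolv.conf. A quick sanity check of name resolution on the host (a sketch, assuming systemd-resolved is in use as on stock Ubuntu 18.04) would be:

# The resolver configuration handed to the kubelet via --resolv-conf
cat /run/systemd/resolve/resolv.conf

# "localhost" should resolve locally rather than being forwarded to 8.8.8.8
getent hosts localhost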

OS: Ubuntu 18.04 LTS

@tstromberg added the co/apiserver, co/none-driver, and ev/apiserver-timeout labels on Aug 21, 2019
@tstromberg changed the title from "Verifying: apiserver proxy is too long and get error" to "none: waiting for k8s-app=kube-proxy: timed out waiting for the condition" on Aug 21, 2019
@tstromberg (Contributor) commented:

Hi! Do you mind sharing the output of:

docker ps -a | grep k8s_kube-proxy

and

docker ps -a | grep k8s_kube-proxy | awk '{ print $1 }' | xargs docker logs

I suspect that kube-proxy is crash-looping but can't tell why from the logs.
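
If docker isn't convenient, roughly the same information should be available through kubectl (a sketch, assuming the apiserver is still reachable):

kubectl -n kube-system get pods -l k8s-app=kube-proxy
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100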

@tstromberg added the co/kube-proxy and triage/needs-information labels and removed the co/apiserver and ev/apiserver-timeout labels on Aug 21, 2019
@medyagh (Member) commented Aug 21, 2019:

I believe we see the same error in the integration tests on the latest release, 1.3.1, but we don't see it on minikube at HEAD!

See
#5163

@medyagh (Member) commented Aug 21, 2019:

@AlekseySkovorodnikov I can confirm this issue, and thank you so much for taking the time to report it! This seems to be a flake that depends on the timeout duration we use while waiting for these pods to come up.

@AlekseySkovorodnikov, do you mind checking whether the error still happens with minikube built from the current HEAD?

You could build minikube by cloning this repo and then running:

make out/minikube

You could then run it like this (a full sequence is sketched below):

./out/minikube start .... (insert your options) --wait-timeout=7m
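
Put together, the whole sequence would look roughly like this (a sketch; the clone URL is the main minikube repository, and the start flags are the ones from the original report):

# Build minikube from HEAD
git clone https://github.com/kubernetes/minikube.git
cd minikube
make out/minikube

# Start with the same options as before, plus a longer wait timeout
./out/minikube start --vm-driver=none \
  --extra-config=kubelet.cgroup-driver=systemd \
  --wait-timeout=7m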

I am very curious whether changing our retry logic has fixed this issue.

@medyagh added the kind/flake, area/testing, and priority/important-soon labels on Aug 21, 2019
@AlekseySkovorodnikov (Author) commented:

> Hi! Do you mind sharing the output of:
>
> docker ps -a | grep k8s_kube-proxy
>
> and
>
> docker ps -a | grep k8s_kube-proxy | awk '{ print $1 }' | xargs docker logs
>
> I suspect that kube-proxy is crash-looping but can't tell why from the logs.

Hi!
I killed this VM :(
