minikube addons enable ingress failed #8841
Comments
We don't support the ingress addon with the none driver; we should fail with a clear error message rather than just spewing logs.
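For anyone hitting this in the meantime, it can help to check which driver your profile is actually using before enabling the addon. A minimal sketch, assuming a default install and the default profile name "minikube" (the config path may differ on your machine):

# list profiles with their drivers (the default profile is usually "minikube")
minikube profile list

# or inspect the stored config for the profile directly
grep -i driver ~/.minikube/profiles/minikube/config.json

If the driver shows up as "none", that is the unsupported combination this issue is about.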
/assign
@sharifelgamal I'd appreciate it if you could take a look at #8870 when you get a chance, to see if the warning message is good.
I think we've done the wrong thing here; we should remove that warning and increase the addons timeout instead.
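While the verification step is running, one rough way to tell whether the pods are merely slow to become Ready (rather than failing outright) is to watch them from a second terminal; a minimal sketch, assuming the addon pods land in kube-system as they do in the logs below:

# stream pod status changes while "minikube addons enable ingress" verifies
kubectl -n kube-system get pods --watch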
It's not convenient that minikube changed the default driver between minor releases. On macOS I use this bootstrap command:

minikube start --driver='hyperkit' && \
  minikube addons enable ingress

For Linux I use this bootstrap command:

minikube start --driver='docker' && \
  minikube addons enable ingress
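If you'd rather not hard-code the driver in every bootstrap script, minikube can also pin the default driver per machine; a small sketch (pick the one that matches your OS):

minikube config set driver hyperkit   # macOS
minikube config set driver docker     # Linux

After that, a plain "minikube start" should keep using the pinned driver regardless of what a newer release defaults to.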
When will we be able to use the ingress addon with the Docker driver on Mac?
@marcusthelin we have an issue (#7332) specifically tracking ingress on Docker on macOS. It's one of our top priorities.
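Until that is resolved, a common interim workaround is to port-forward to the ingress controller service and send requests to localhost with the right Host header. A rough sketch, assuming the service is named ingress-nginx-controller (check the actual name and namespace with "kubectl get svc -A"; the hostname below is only an example):

kubectl -n kube-system port-forward service/ingress-nginx-controller 8080:80
curl -H 'Host: myapp.example.com' http://localhost:8080/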
@sharifelgamal Thank you!
Closing this in favor of #7332.
Steps to reproduce the issue:
Hello, I'm trying to enable the ingress addon in minikube.
1. root@kmaster-01:/etc/kubernetes/Ingress# minikube addons enable ingress
🔎 Verifying ingress addon...
💣 enable failed: run callbacks: running callbacks: [verifying ingress addon pods : timed out waiting for the condition: timed out waiting for the condition]
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
Full output of failed command:
root@kmaster-01:/etc/kubernetes/Ingress# minikube addons enable ingress
🔎 Verifying ingress addon...
💣 enable failed: run callbacks: running callbacks: [verifying ingress addon pods : timed out waiting for the condition: timed out waiting for the condition]
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
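If it helps triage, here is a minimal sketch of the commands I'd use to see why the verification timed out (the pod name is taken from the container list further down; yours may differ):

kubectl -n kube-system get pods -o wide | grep ingress
kubectl -n kube-system describe pod ingress-nginx-controller-69ccf5d9d8-7qvfn
kubectl -n kube-system logs ingress-nginx-controller-69ccf5d9d8-7qvfn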
Full output of minikube start command used, if not already included:
root@kmaster-01:/etc/kubernetes/Ingress# minikube start
😄 minikube v1.12.1 on Debian bullseye/sid
✨ Using the none driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running none "minikube" bare metal machine ...
ℹ️ OS release is Debian GNU/Linux bullseye/sid
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
🤹 Configuring local host environment ...
❗ The 'none' driver is designed for experts who need to integrate with an existing VM
💡 Most users should use the newer 'docker' driver instead, which does not require root!
📘 For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❗ kubectl and minikube configuration will be stored in /root
❗ To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
💡 This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
🔎 Verifying Kubernetes components...
🌟 Enabled addons: dashboard, default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
Optional: Full output of minikube logs command:
==> container status <==
sudo: crictl: command not found
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
12a744acb616 k8s.gcr.io/pause:3.2 "/pause" 4 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_313
53be4d929eb4 k8s.gcr.io/pause:3.2 "/pause" 15 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_312
b98f0d5c1244 k8s.gcr.io/pause:3.2 "/pause" 24 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_311
0b962de8c87c k8s.gcr.io/pause:3.2 "/pause" 32 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_310
1e2c12ff4b00 k8s.gcr.io/pause:3.2 "/pause" 38 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_309
7977802dfa6c k8s.gcr.io/pause:3.2 "/pause" 46 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_308
b931944edb0a k8s.gcr.io/pause:3.2 "/pause" 52 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_307
2d9c1e718de5 k8s.gcr.io/pause:3.2 "/pause" 59 seconds ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_306
d68e8bfd8018 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_305
33a9161cc485 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Created k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_304
96806baf5a50 k8s.gcr.io/busybox "/bin/sh -c 'touch /…" About a minute ago Up About a minute k8s_liveness_liveness-exec_default_864baeac-abd1-4c51-9883-245eb37be672_144
54313525ccf8 k8s.gcr.io/busybox "/bin/sh -c 'touch /…" 2 minutes ago Exited (137) About a minute ago k8s_liveness_liveness-exec_default_864baeac-abd1-4c51-9883-245eb37be672_143
b35f2a429d52 us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller "/usr/bin/dumb-init …" 5 minutes ago Up 5 minutes k8s_controller_ingress-nginx-controller-c96557986-grglz_ingress-nginx_56dd19ce-71c9-4f74-a6fa-29ed113ba631_0
8fb8b9a04823 k8s.gcr.io/pause:3.2 "/pause" 7 minutes ago Up 7 minutes k8s_POD_ingress-nginx-controller-c96557986-grglz_ingress-nginx_56dd19ce-71c9-4f74-a6fa-29ed113ba631_0
6bb904aa39d7 5693ebf5622a "/kube-webhook-certg…" 7 minutes ago Exited (0) 7 minutes ago k8s_patch_ingress-nginx-admission-patch-pnqr4_ingress-nginx_96be9754-dc2c-43a2-8c28-6bb4cc94b1e0_1
0b21fe8c9bd5 5693ebf5622a "/kube-webhook-certg…" 8 minutes ago Exited (0) 7 minutes ago k8s_create_ingress-nginx-admission-create-ltm76_ingress-nginx_16dc3b2f-c94a-4374-9a96-977771e8c768_0
303fc28d51f5 k8s.gcr.io/pause:3.2 "/pause" 8 minutes ago Exited (137) 7 minutes ago k8s_POD_ingress-nginx-admission-patch-pnqr4_ingress-nginx_96be9754-dc2c-43a2-8c28-6bb4cc94b1e0_0
0d2ffac8d52b k8s.gcr.io/pause:3.2 "/pause" 8 minutes ago Exited (137) 7 minutes ago k8s_POD_ingress-nginx-admission-create-ltm76_ingress-nginx_16dc3b2f-c94a-4374-9a96-977771e8c768_0
50bc65609c7f 5693ebf5622a "/kube-webhook-certg…" 37 minutes ago Exited (0) 36 minutes ago k8s_patch_ingress-nginx-admission-patch-9cmsj_kube-system_0140e2de-9359-485a-aa26-7a88ebf2e701_1
68ac3aa5fd03 jettech/kube-webhook-certgen "/kube-webhook-certg…" 37 minutes ago Exited (0) 37 minutes ago k8s_create_ingress-nginx-admission-create-4bnt4_kube-system_2e57fd7e-2ee8-4e46-8ab1-0ea153b91a1e_0
57e5f8f3a03a k8s.gcr.io/pause:3.2 "/pause" 37 minutes ago Exited (0) 36 minutes ago k8s_POD_ingress-nginx-admission-patch-9cmsj_kube-system_0140e2de-9359-485a-aa26-7a88ebf2e701_0
fb51e2a22ab4 k8s.gcr.io/pause:3.2 "/pause" 37 minutes ago Exited (0) 36 minutes ago k8s_POD_ingress-nginx-admission-create-4bnt4_kube-system_2e57fd7e-2ee8-4e46-8ab1-0ea153b91a1e_0
ca1d037a84dc k8s.gcr.io/pause:3.2 "/pause" 24 hours ago Up 24 hours k8s_POD_liveness-exec_default_864baeac-abd1-4c51-9883-245eb37be672_0
1f3dc179ed34 ecd67fe340f9 "/docker-entrypoint.…" 24 hours ago Up 24 hours k8s_nginx_webserver-97499b967-fhfqz_default_93a01570-ea54-4884-b96e-cdfc610678cc_0
3967f0074a6f ecd67fe340f9 "/docker-entrypoint.…" 24 hours ago Up 24 hours k8s_nginx_webserver-97499b967-mt6kj_default_805f989f-0235-40c8-be7f-e258d341fc31_0
22bfba87866d ecd67fe340f9 "/docker-entrypoint.…" 24 hours ago Up 24 hours k8s_nginx_webserver-97499b967-hr7wk_default_9b17d749-1dad-4644-9a81-93f0dd0211f9_0
56f066698927 k8s.gcr.io/pause:3.2 "/pause" 24 hours ago Up 24 hours k8s_POD_webserver-97499b967-fhfqz_default_93a01570-ea54-4884-b96e-cdfc610678cc_0
42314f2445ff k8s.gcr.io/pause:3.2 "/pause" 24 hours ago Up 24 hours k8s_POD_webserver-97499b967-mt6kj_default_805f989f-0235-40c8-be7f-e258d341fc31_0
519c9b288dcb k8s.gcr.io/pause:3.2 "/pause" 24 hours ago Up 24 hours k8s_POD_webserver-97499b967-hr7wk_default_9b17d749-1dad-4644-9a81-93f0dd0211f9_0
ee19cda869b7 nginx "/docker-entrypoint.…" 24 hours ago Up 24 hours k8s_nginx_nginx-745b4df97d-p2nr9_lfs158_28f51045-ec0c-418c-b8a7-2397c81df39d_0
46d3f1d59344 k8s.gcr.io/pause:3.2 "/pause" 24 hours ago Up 24 hours k8s_POD_nginx-745b4df97d-p2nr9_lfs158_28f51045-ec0c-418c-b8a7-2397c81df39d_0
0b3cf2acce4f kubernetesui/metrics-scraper "/metrics-sidecar" 30 hours ago Up 30 hours k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-dc6947fbf-9nvf8_kubernetes-dashboard_ef75284d-8bdb-4c72-91de-d67972601d3c_0
d616f68c7110 kubernetesui/dashboard "/dashboard --insecu…" 30 hours ago Up 30 hours k8s_kubernetes-dashboard_kubernetes-dashboard-6dbb54fd95-f4pq9_kubernetes-dashboard_ca1da9a0-82b6-4126-9570-0cc92a2178c6_0
be9cfb7dd714 k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_kubernetes-dashboard-6dbb54fd95-f4pq9_kubernetes-dashboard_ca1da9a0-82b6-4126-9570-0cc92a2178c6_0
fa244914d3fc k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_dashboard-metrics-scraper-dc6947fbf-9nvf8_kubernetes-dashboard_ef75284d-8bdb-4c72-91de-d67972601d3c_0
1253789e61fa 67da37a9a360 "/coredns -conf /etc…" 30 hours ago Up 30 hours k8s_coredns_coredns-66bff467f8-4wwxt_kube-system_2f6ae3b5-b31f-4638-a8f6-fbf9c2160f5c_1
ca0634e133af 4689081edb10 "/storage-provisioner" 30 hours ago Up 30 hours k8s_storage-provisioner_storage-provisioner_kube-system_51ad85a2-2f92-4cb7-939c-96bec059a2c7_1
e5e9e87eee38 3439b7546f29 "/usr/local/bin/kube…" 30 hours ago Up 30 hours k8s_kube-proxy_kube-proxy-767vp_kube-system_61706725-b231-4f5d-8435-ee4827caf8a4_1
b53b229df4fb k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_storage-provisioner_kube-system_51ad85a2-2f92-4cb7-939c-96bec059a2c7_1
23bc72aefa0b k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_kube-proxy-767vp_kube-system_61706725-b231-4f5d-8435-ee4827caf8a4_1
b2276c109b5c k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_coredns-66bff467f8-4wwxt_kube-system_2f6ae3b5-b31f-4638-a8f6-fbf9c2160f5c_1
793df161b222 7e28efa976bd "kube-apiserver --ad…" 30 hours ago Up 30 hours k8s_kube-apiserver_kube-apiserver-kmaster-01_kube-system_4ddb50c6610f8e82520c56afcb7dbd76_1
aead8f81b956 da26705ccb4b "kube-controller-man…" 30 hours ago Up 30 hours k8s_kube-controller-manager_kube-controller-manager-kmaster-01_kube-system_d16b572ec306724504d73dd594538db5_2
bfb9f333328d 303ce5db0e90 "etcd --advertise-cl…" 30 hours ago Up 30 hours k8s_etcd_etcd-kmaster-01_kube-system_28b71ebc81d78cacba0d10e5b4065ed0_1
e86ee2dbbb76 76216c34ed0c "kube-scheduler --au…" 30 hours ago Up 30 hours k8s_kube-scheduler_kube-scheduler-kmaster-01_kube-system_dcddbd0cc8c89e2cbf4de5d3cca8769f_1
873fde4622a0 k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_etcd-kmaster-01_kube-system_28b71ebc81d78cacba0d10e5b4065ed0_1
8bfb6aa15ad1 k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_kube-scheduler-kmaster-01_kube-system_dcddbd0cc8c89e2cbf4de5d3cca8769f_1
e89fb0f424e0 k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_kube-apiserver-kmaster-01_kube-system_4ddb50c6610f8e82520c56afcb7dbd76_1
6e6413bd12ff k8s.gcr.io/pause:3.2 "/pause" 30 hours ago Up 30 hours k8s_POD_kube-controller-manager-kmaster-01_kube-system_d16b572ec306724504d73dd594538db5_1
01fdf86ab4a2 gcr.io/k8s-minikube/storage-provisioner "/storage-provisioner" 44 hours ago Exited (255) 30 hours ago k8s_storage-provisioner_storage-provisioner_kube-system_51ad85a2-2f92-4cb7-939c-96bec059a2c7_0
8f1c3aef0d2e k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_storage-provisioner_kube-system_51ad85a2-2f92-4cb7-939c-96bec059a2c7_0
0951f8e71422 67da37a9a360 "/coredns -conf /etc…" 44 hours ago Exited (255) 30 hours ago k8s_coredns_coredns-66bff467f8-4wwxt_kube-system_2f6ae3b5-b31f-4638-a8f6-fbf9c2160f5c_0
73f3f8e3ca78 3439b7546f29 "/usr/local/bin/kube…" 44 hours ago Exited (255) 30 hours ago k8s_kube-proxy_kube-proxy-767vp_kube-system_61706725-b231-4f5d-8435-ee4827caf8a4_0
b84ec3a94888 k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_coredns-66bff467f8-4wwxt_kube-system_2f6ae3b5-b31f-4638-a8f6-fbf9c2160f5c_0
5136cc4add8c k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_kube-proxy-767vp_kube-system_61706725-b231-4f5d-8435-ee4827caf8a4_0
d165bc7d43f8 da26705ccb4b "kube-controller-man…" 44 hours ago Exited (255) 30 hours ago k8s_kube-controller-manager_kube-controller-manager-kmaster-01_kube-system_d16b572ec306724504d73dd594538db5_1
b43e1c0c3d5f 76216c34ed0c "kube-scheduler --au…" 44 hours ago Exited (255) 30 hours ago k8s_kube-scheduler_kube-scheduler-kmaster-01_kube-system_dcddbd0cc8c89e2cbf4de5d3cca8769f_0
3a1054cc3424 7e28efa976bd "kube-apiserver --ad…" 44 hours ago Exited (255) 30 hours ago k8s_kube-apiserver_kube-apiserver-kmaster-01_kube-system_4ddb50c6610f8e82520c56afcb7dbd76_0
cbd525c9ef77 303ce5db0e90 "etcd --advertise-cl…" 44 hours ago Exited (255) 30 hours ago k8s_etcd_etcd-kmaster-01_kube-system_28b71ebc81d78cacba0d10e5b4065ed0_0
80b92bdefb6d k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_etcd-kmaster-01_kube-system_28b71ebc81d78cacba0d10e5b4065ed0_0
808418240d25 k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_kube-scheduler-kmaster-01_kube-system_dcddbd0cc8c89e2cbf4de5d3cca8769f_0
c57517b2dfc1 k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_kube-controller-manager-kmaster-01_kube-system_d16b572ec306724504d73dd594538db5_0
e063c73cfabb k8s.gcr.io/pause:3.2 "/pause" 44 hours ago Exited (255) 30 hours ago k8s_POD_kube-apiserver-kmaster-01_kube-system_4ddb50c6610f8e82520c56afcb7dbd76_0
==> coredns [0951f8e71422] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
==> coredns [1253789e61fa] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0724 10:33:41.306817 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-07-24 10:33:10.725044037 +0000 UTC m=+5.040809241) (total time: 30.042532967s):
Trace[2019727887]: [30.042532967s] [30.042532967s] END
E0724 10:33:41.306864 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0724 10:33:41.306902 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-07-24 10:33:10.725095437 +0000 UTC m=+5.040860341) (total time: 30.042507067s):
Trace[1427131847]: [30.042507067s] [30.042507067s] END
E0724 10:33:41.306922 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0724 10:33:41.306969 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-07-24 10:33:10.725107837 +0000 UTC m=+5.040872341) (total time: 30.042513867s):
Trace[939984059]: [30.042513867s] [30.042513867s] END
E0724 10:33:41.306995 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
==> describe nodes <==
Name: kmaster-01
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=kmaster-01
kubernetes.io/os=linux
minikube.k8s.io/commit=5664228288552de9f3a446ea4f51c6f29bbdd0e0-dirty
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_07_23T20_22_19_0700
minikube.k8s.io/version=v1.12.1
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 23 Jul 2020 20:21:42 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: kmaster-01
AcquireTime:
RenewTime: Sat, 25 Jul 2020 16:11:07 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Sat, 25 Jul 2020 16:10:36 +0000 Thu, 23 Jul 2020 20:21:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 25 Jul 2020 16:10:36 +0000 Thu, 23 Jul 2020 20:21:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 25 Jul 2020 16:10:36 +0000 Thu, 23 Jul 2020 20:21:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 25 Jul 2020 16:10:36 +0000 Thu, 23 Jul 2020 20:21:53 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.1.60
Hostname: kmaster-01
Capacity:
cpu: 8
ephemeral-storage: 262656100Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 20451728Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 262656100Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 20451728Ki
pods: 110
System Info:
Machine ID: 65ff3fbfc90d437bad239ef808d15c54
System UUID: 7899442f-fb48-4e62-9340-ef113c37db01
Boot ID: 891e6b9a-f020-4017-8fe9-06f755d84fec
Kernel Version: 5.7.0-1-amd64
OS Image: Debian GNU/Linux bullseye/sid
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.12
Kubelet Version: v1.18.3
Kube-Proxy Version: v1.18.3
Non-terminated Pods: (16 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
default liveness-exec 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23h
default webserver-97499b967-fhfqz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23h
default webserver-97499b967-hr7wk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23h
default webserver-97499b967-mt6kj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 23h
ingress-nginx ingress-nginx-controller-c96557986-grglz 100m (1%) 0 (0%) 90Mi (0%) 0 (0%) 9m11s
kube-system coredns-66bff467f8-4wwxt 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 43h
kube-system etcd-kmaster-01 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43h
kube-system ingress-nginx-controller-69ccf5d9d8-7qvfn 100m (1%) 0 (0%) 90Mi (0%) 0 (0%) 38m
kube-system kube-apiserver-kmaster-01 250m (3%) 0 (0%) 0 (0%) 0 (0%) 43h
kube-system kube-controller-manager-kmaster-01 200m (2%) 0 (0%) 0 (0%) 0 (0%) 43h
kube-system kube-proxy-767vp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43h
kube-system kube-scheduler-kmaster-01 100m (1%) 0 (0%) 0 (0%) 0 (0%) 43h
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43h
kubernetes-dashboard dashboard-metrics-scraper-dc6947fbf-9nvf8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29h
kubernetes-dashboard kubernetes-dashboard-6dbb54fd95-f4pq9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29h
lfs158 nginx-745b4df97d-p2nr9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 850m (10%) 0 (0%)
memory 250Mi (1%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
==> dmesg <==
[Jul25 01:08] #3 #4 #5 #6 #7
[ +1.135033] PCI: System does not support PCI
[ +1.308314] Unstable clock detected, switching default tracing clock to "global"
If you want to keep using the local clock, then add:
"trace_clock=local"
on the kernel command line
[ +0.505548] process '/usr/bin/fstype' started with executable stack
[ +8.655000] systemd-journald[236]: File /var/log/journal/65ff3fbfc90d437bad239ef808d15c54/system.journal corrupted or uncleanly shut down, renaming and replacing.
[Jul25 01:09] kauditd_printk_skb: 6 callbacks suppressed
[Jul25 01:10] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Jul25 06:42] hrtimer: interrupt took 6024435 ns
==> etcd [bfb9f333328d] <==
2020-07-25 16:09:51.709636 W | etcdserver: request "header:<ID:830759651496533096 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/events/default/liveness-exec.1624bd8a14d92603" mod_revision:12120 > success:<request_put:<key:"/registry/events/default/liveness-exec.1624bd8a14d92603" value_size:711 lease:830759651496533094 >> failure:<request_range:<key:"/registry/events/default/liveness-exec.1624bd8a14d92603" > >>" with result "size:16" took too long (341.508264ms) to execute
2020-07-25 16:09:54.751715 W | etcdserver: read-only range request "key:"/registry/endpointslices" range_end:"/registry/endpointslicet" count_only:true " with result "range_response_count:0 size:7" took too long (307.314087ms) to execute
2020-07-25 16:09:59.487353 W | etcdserver: request "header:<ID:830759651496533111 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12594 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533109 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (276.254527ms) to execute
2020-07-25 16:10:07.322652 W | etcdserver: read-only range request "key:"/registry/serviceaccounts" range_end:"/registry/serviceaccountt" count_only:true " with result "range_response_count:0 size:7" took too long (268.115384ms) to execute
2020-07-25 16:10:09.838711 W | etcdserver: request "header:<ID:830759651496533137 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12598 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533135 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (283.279562ms) to execute
2020-07-25 16:10:09.839100 W | etcdserver: read-only range request "key:"/registry/priorityclasses" range_end:"/registry/priorityclasset" count_only:true " with result "range_response_count:0 size:7" took too long (137.44091ms) to execute
2020-07-25 16:10:10.313311 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (188.948976ms) to execute
2020-07-25 16:10:10.313534 W | etcdserver: read-only range request "key:"/registry/jobs/" range_end:"/registry/jobs0" limit:500 " with result "range_response_count:4 size:15245" took too long (270.051494ms) to execute
2020-07-25 16:10:17.556261 W | etcdserver: read-only range request "key:"/registry/mutatingwebhookconfigurations" range_end:"/registry/mutatingwebhookconfigurationt" count_only:true " with result "range_response_count:0 size:5" took too long (615.552977ms) to execute
2020-07-25 16:10:19.640114 W | wal: sync duration of 1.027379703s, expected less than 1s
2020-07-25 16:10:19.717889 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (685.989041ms) to execute
2020-07-25 16:10:20.332382 W | etcdserver: request "header:<ID:830759651496533163 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12601 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533161 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (458.102765ms) to execute
2020-07-25 16:10:20.949260 W | etcdserver: read-only range request "key:"/registry/resourcequotas" range_end:"/registry/resourcequotat" count_only:true " with result "range_response_count:0 size:5" took too long (364.26298ms) to execute
2020-07-25 16:10:24.858209 W | etcdserver: read-only range request "key:"/registry/services/specs/ingress-nginx/ingress-nginx-controller" " with result "range_response_count:1 size:2279" took too long (175.556506ms) to execute
2020-07-25 16:10:29.575646 W | etcdserver: request "header:<ID:830759651496533186 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12605 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533184 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (365.992389ms) to execute
2020-07-25 16:10:34.568645 W | etcdserver: read-only range request "key:"/registry/storageclasses" range_end:"/registry/storageclasset" count_only:true " with result "range_response_count:0 size:7" took too long (157.531213ms) to execute
2020-07-25 16:10:34.568707 W | etcdserver: read-only range request "key:"/registry/daemonsets" range_end:"/registry/daemonsett" count_only:true " with result "range_response_count:0 size:7" took too long (249.132386ms) to execute
2020-07-25 16:10:34.568902 W | etcdserver: read-only range request "key:"/registry/namespaces/kube-system" " with result "range_response_count:1 size:263" took too long (621.857409ms) to execute
2020-07-25 16:10:37.411644 W | etcdserver: read-only range request "key:"/registry/horizontalpodautoscalers" range_end:"/registry/horizontalpodautoscalert" count_only:true " with result "range_response_count:0 size:5" took too long (356.548039ms) to execute
2020-07-25 16:10:37.985971 W | etcdserver: read-only range request "key:"/registry/events" range_end:"/registry/eventt" count_only:true " with result "range_response_count:0 size:7" took too long (303.563766ms) to execute
2020-07-25 16:10:37.986092 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper" " with result "range_response_count:1 size:891" took too long (522.825298ms) to execute
2020-07-25 16:10:39.828880 W | etcdserver: request "header:<ID:830759651496533214 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12608 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533212 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (667.434044ms) to execute
2020-07-25 16:10:40.720130 W | etcdserver: read-only range request "key:"/registry/jobs/" range_end:"/registry/jobs0" limit:500 " with result "range_response_count:4 size:15245" took too long (371.876419ms) to execute
2020-07-25 16:10:42.028261 W | etcdserver: read-only range request "key:"/registry/csidrivers" range_end:"/registry/csidrivert" count_only:true " with result "range_response_count:0 size:5" took too long (707.465151ms) to execute
2020-07-25 16:10:42.028779 W | etcdserver: read-only range request "key:"/registry/cronjobs" range_end:"/registry/cronjobt" count_only:true " with result "range_response_count:0 size:5" took too long (830.537285ms) to execute
2020-07-25 16:10:42.029157 W | etcdserver: read-only range request "key:"/registry/ingress" range_end:"/registry/ingrest" count_only:true " with result "range_response_count:0 size:5" took too long (431.125524ms) to execute
2020-07-25 16:10:44.237182 W | etcdserver: read-only range request "key:"/registry/controllerrevisions" range_end:"/registry/controllerrevisiont" count_only:true " with result "range_response_count:0 size:7" took too long (872.976804ms) to execute
2020-07-25 16:10:44.784841 W | etcdserver: read-only range request "key:"/registry/validatingwebhookconfigurations" range_end:"/registry/validatingwebhookconfigurationt" count_only:true " with result "range_response_count:0 size:7" took too long (802.915143ms) to execute
2020-07-25 16:10:44.785084 W | etcdserver: read-only range request "key:"/registry/mutatingwebhookconfigurations" range_end:"/registry/mutatingwebhookconfigurationt" count_only:true " with result "range_response_count:0 size:5" took too long (1.172369949s) to execute
2020-07-25 16:10:44.785486 W | etcdserver: request "header:<ID:830759651496533229 > lease_revoke:id:0b877380612d9cc0" with result "size:28" took too long (356.59754ms) to execute
2020-07-25 16:10:44.785762 W | etcdserver: read-only range request "key:"/registry/leases" range_end:"/registry/leaset" count_only:true " with result "range_response_count:0 size:7" took too long (463.846194ms) to execute
2020-07-25 16:10:45.479074 W | etcdserver: read-only range request "key:"/registry/configmaps" range_end:"/registry/configmapt" count_only:true " with result "range_response_count:0 size:7" took too long (238.956533ms) to execute
2020-07-25 16:10:48.301602 W | etcdserver: read-only range request "key:"/registry/events" range_end:"/registry/eventt" count_only:true " with result "range_response_count:0 size:7" took too long (180.177529ms) to execute
2020-07-25 16:10:49.088195 W | etcdserver: read-only range request "key:"/registry/clusterroles" range_end:"/registry/clusterrolet" count_only:true " with result "range_response_count:0 size:7" took too long (177.965019ms) to execute
2020-07-25 16:10:49.798959 W | etcdserver: request "header:<ID:830759651496533247 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12612 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533245 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (519.253579ms) to execute
2020-07-25 16:10:49.799213 W | etcdserver: read-only range request "key:"/registry/services/endpoints" range_end:"/registry/services/endpointt" count_only:true " with result "range_response_count:0 size:7" took too long (269.066688ms) to execute
2020-07-25 16:10:50.755025 W | etcdserver: read-only range request "key:"/registry/rolebindings" range_end:"/registry/rolebindingt" count_only:true " with result "range_response_count:0 size:7" took too long (260.946646ms) to execute
2020-07-25 16:10:54.889284 W | etcdserver: read-only range request "key:"/registry/priorityclasses" range_end:"/registry/priorityclasset" count_only:true " with result "range_response_count:0 size:7" took too long (363.612676ms) to execute
2020-07-25 16:10:57.064891 W | etcdserver: read-only range request "key:"/registry/certificatesigningrequests" range_end:"/registry/certificatesigningrequestt" count_only:true " with result "range_response_count:0 size:5" took too long (728.566659ms) to execute
2020-07-25 16:10:57.065073 W | etcdserver: read-only range request "key:"/registry/ingress" range_end:"/registry/ingrest" count_only:true " with result "range_response_count:0 size:5" took too long (411.552423ms) to execute
2020-07-25 16:10:59.523250 W | etcdserver: request "header:<ID:830759651496533276 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12617 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533274 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (332.888217ms) to execute
2020-07-25 16:10:59.523432 W | etcdserver: read-only range request "key:"/registry/minions" range_end:"/registry/miniont" count_only:true " with result "range_response_count:0 size:7" took too long (373.171525ms) to execute
2020-07-25 16:11:00.148810 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (623.800419ms) to execute
2020-07-25 16:11:04.124692 W | etcdserver: read-only range request "key:"/registry/limitranges" range_end:"/registry/limitranget" count_only:true " with result "range_response_count:0 size:5" took too long (384.296483ms) to execute
2020-07-25 16:11:08.166348 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper" " with result "range_response_count:1 size:891" took too long (174.175898ms) to execute
2020-07-25 16:11:09.708769 W | etcdserver: request "header:<ID:830759651496533306 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12620 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533304 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (549.964037ms) to execute
2020-07-25 16:11:11.909639 W | etcdserver: read-only range request "key:"/registry/secrets" range_end:"/registry/secrett" count_only:true " with result "range_response_count:0 size:7" took too long (244.239759ms) to execute
2020-07-25 16:11:11.909745 W | etcdserver: read-only range request "key:"/registry/pods" range_end:"/registry/podt" count_only:true " with result "range_response_count:0 size:7" took too long (740.057717ms) to execute
2020-07-25 16:11:18.753856 W | wal: sync duration of 1.325619237s, expected less than 1s
2020-07-25 16:11:19.527266 W | etcdserver: read-only range request "key:"/registry/volumeattachments" range_end:"/registry/volumeattachmentt" count_only:true " with result "range_response_count:0 size:5" took too long (680.687111ms) to execute
2020-07-25 16:11:19.527380 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (493.663946ms) to execute
2020-07-25 16:11:19.527397 W | etcdserver: read-only range request "key:"/registry/namespaces" range_end:"/registry/namespacet" count_only:true " with result "range_response_count:0 size:7" took too long (510.538333ms) to execute
2020-07-25 16:11:20.027430 W | etcdserver: request "header:<ID:830759651496533333 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12623 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533331 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (291.144001ms) to execute
2020-07-25 16:11:20.119397 W | etcdserver: read-only range request "key:"/registry/replicasets" range_end:"/registry/replicasett" count_only:true " with result "range_response_count:0 size:7" took too long (354.404828ms) to execute
2020-07-25 16:11:24.836558 W | etcdserver: read-only range request "key:"/registry/services/specs/ingress-nginx/ingress-nginx-controller" " with result "range_response_count:1 size:2279" took too long (154.188395ms) to execute
2020-07-25 16:11:24.836704 W | etcdserver: read-only range request "key:"/registry/deployments" range_end:"/registry/deploymentt" count_only:true " with result "range_response_count:0 size:7" took too long (334.048523ms) to execute
2020-07-25 16:11:28.081220 W | etcdserver: read-only range request "key:"/registry/ranges/serviceips" " with result "range_response_count:1 size:121265" took too long (377.223345ms) to execute
2020-07-25 16:11:29.479635 W | etcdserver: request "header:<ID:830759651496533359 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12627 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533357 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (191.172686ms) to execute
2020-07-25 16:11:38.282636 W | etcdserver: read-only range request "key:"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper" " with result "range_response_count:1 size:891" took too long (110.984672ms) to execute
2020-07-25 16:11:39.965381 W | etcdserver: request "header:<ID:830759651496533383 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:12630 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:68 lease:830759651496533381 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (575.297566ms) to execute
==> etcd [cbd525c9ef77] <==
2020-07-23 20:23:45.460130 W | etcdserver: read-only range request "key:"/registry/health" " with result "range_response_count:0 size:5" took too long (233.240375ms) to execute
2020-07-23 20:23:50.852236 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (264.658509ms) to execute
2020-07-23 20:23:55.936512 W | etcdserver: request "header:<ID:830759638434256316 > lease_revoke:id:0b87737d569b1592" with result "size:28" took too long (221.153676ms) to execute
2020-07-23 20:24:00.771704 W | etcdserver: request "header:<ID:830759638434256323 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:431 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256321 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (191.560249ms) to execute
2020-07-23 20:24:10.739573 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (156.301678ms) to execute
2020-07-23 20:24:17.737832 W | etcdserver: read-only range request "key:"/registry/health" " with result "error:context canceled" took too long (2.000474385s) to execute
WARNING: 2020/07/23 20:24:17 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-07-23 20:24:18.140064 W | etcdserver: read-only range request "key:"/registry/jobs/" range_end:"/registry/jobs0" limit:500 " with result "range_response_count:0 size:5" took too long (2.550812638s) to execute
2020-07-23 20:24:18.140338 W | etcdserver: request "header:<ID:830759638434256352 > lease_revoke:id:0b87737d569b15c1" with result "size:28" took too long (474.925635ms) to execute
2020-07-23 20:24:18.615134 W | etcdserver: read-only range request "key:"/registry/cronjobs/" range_end:"/registry/cronjobs0" limit:500 " with result "range_response_count:0 size:5" took too long (472.263916ms) to execute
2020-07-23 20:24:18.615222 W | etcdserver: request "header:<ID:830759638434256358 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/events/kube-system/kube-apiserver-kmaster-01.16247b68badf53fc" mod_revision:0 > success:<request_put:<key:"/registry/events/kube-system/kube-apiserver-kmaster-01.16247b68badf53fc" value_size:697 lease:830759638434256353 >> failure:<>>" with result "size:16" took too long (266.312314ms) to execute
2020-07-23 20:24:21.357235 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (256.028644ms) to execute
2020-07-23 20:24:31.276889 W | etcdserver: request "header:<ID:830759638434256385 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:440 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256383 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (174.521388ms) to execute
2020-07-23 20:24:41.303605 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (194.940627ms) to execute
2020-07-23 20:24:51.347267 W | etcdserver: request "header:<ID:830759638434256442 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:444 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256440 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (207.489697ms) to execute
2020-07-23 20:25:01.249780 W | etcdserver: request "header:<ID:830759638434256466 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:446 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256464 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (132.416251ms) to execute
2020-07-23 20:25:11.260235 W | etcdserver: request "header:<ID:830759638434256486 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:448 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256484 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (157.882696ms) to execute
2020-07-23 20:25:21.262661 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (157.345455ms) to execute
2020-07-23 20:25:21.262849 W | etcdserver: read-only range request "key:"/registry/poddisruptionbudgets" range_end:"/registry/poddisruptionbudgett" count_only:true " with result "range_response_count:0 size:5" took too long (154.490305ms) to execute
2020-07-23 20:25:31.247567 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (147.563233ms) to execute
2020-07-23 20:25:41.274605 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (153.527497ms) to execute
2020-07-23 20:25:46.742951 W | etcdserver: read-only range request "key:"/registry/validatingwebhookconfigurations" range_end:"/registry/validatingwebhookconfigurationt" count_only:true " with result "range_response_count:0 size:5" took too long (135.996516ms) to execute
2020-07-23 20:25:51.269538 W | etcdserver: request "header:<ID:830759638434256577 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:456 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256575 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (140.935377ms) to execute
2020-07-23 20:25:51.610732 W | etcdserver: read-only range request "key:"/registry/clusterroles" range_end:"/registry/clusterrolet" count_only:true " with result "range_response_count:0 size:7" took too long (103.643665ms) to execute
2020-07-23 20:26:01.287401 W | etcdserver: request "header:<ID:830759638434256598 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:458 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256596 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (173.962601ms) to execute
2020-07-23 20:26:11.271998 W | etcdserver: request "header:<ID:830759638434256617 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:460 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256615 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (154.665749ms) to execute
2020-07-23 20:26:21.274103 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (161.947687ms) to execute
2020-07-23 20:26:31.250512 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (147.622046ms) to execute
2020-07-23 20:26:41.278481 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (163.305918ms) to execute
2020-07-23 20:26:51.322147 W | etcdserver: request "header:<ID:830759638434256705 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:469 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256703 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (150.578556ms) to execute
2020-07-23 20:27:01.256761 W | etcdserver: request "header:<ID:830759638434256727 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:471 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256725 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (150.659014ms) to execute
2020-07-23 20:27:05.207497 W | etcdserver: read-only range request "key:"/registry/mutatingwebhookconfigurations" range_end:"/registry/mutatingwebhookconfigurationt" count_only:true " with result "range_response_count:0 size:5" took too long (186.867523ms) to execute
2020-07-23 20:27:11.267233 W | etcdserver: request "header:<ID:830759638434256751 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:473 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256749 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (148.915303ms) to execute
2020-07-23 20:27:16.909734 W | etcdserver: read-only range request "key:"/registry/serviceaccounts" range_end:"/registry/serviceaccountt" count_only:true " with result "range_response_count:0 size:7" took too long (279.965467ms) to execute
2020-07-23 20:27:21.269088 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (162.463032ms) to execute
2020-07-23 20:27:21.269388 W | etcdserver: read-only range request "key:"/registry/apiextensions.k8s.io/customresourcedefinitions" range_end:"/registry/apiextensions.k8s.io/customresourcedefinitiont" count_only:true " with result "range_response_count:0 size:5" took too long (165.484531ms) to execute
2020-07-23 20:27:31.278645 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (159.166439ms) to execute
2020-07-23 20:27:41.282055 W | etcdserver: request "header:<ID:830759638434256813 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:479 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256811 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (141.025002ms) to execute
2020-07-23 20:27:45.286720 W | etcdserver: read-only range request "key:"/registry/csinodes" range_end:"/registry/csinodet" count_only:true " with result "range_response_count:0 size:7" took too long (222.070195ms) to execute
2020-07-23 20:27:51.266823 W | etcdserver: request "header:<ID:830759638434256846 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:481 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256844 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (158.420071ms) to execute
2020-07-23 20:28:01.318660 W | etcdserver: request "header:<ID:830759638434256867 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:483 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256865 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (155.858307ms) to execute
2020-07-23 20:28:11.287203 W | etcdserver: request "header:<ID:830759638434256889 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:485 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256887 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (149.751183ms) to execute
2020-07-23 20:28:21.322989 W | etcdserver: request "header:<ID:830759638434256911 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:487 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256909 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (147.986327ms) to execute
2020-07-23 20:28:31.284503 W | etcdserver: request "header:<ID:830759638434256929 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:489 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256927 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (141.131645ms) to execute
2020-07-23 20:28:41.260593 W | etcdserver: request "header:<ID:830759638434256948 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:491 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434256946 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (151.027522ms) to execute
2020-07-23 20:28:51.279127 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (162.739734ms) to execute
2020-07-23 20:29:01.315184 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (215.797406ms) to execute
2020-07-23 20:29:11.274989 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (161.636671ms) to execute
2020-07-23 20:29:21.294235 W | etcdserver: request "header:<ID:830759638434257043 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:500 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434257041 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (141.139767ms) to execute
2020-07-23 20:29:21.294593 W | etcdserver: read-only range request "key:"/registry/services/specs" range_end:"/registry/services/spect" count_only:true " with result "range_response_count:0 size:7" took too long (123.051867ms) to execute
2020-07-23 20:29:26.344395 W | etcdserver: read-only range request "key:"/registry/events" range_end:"/registry/eventt" count_only:true " with result "range_response_count:0 size:7" took too long (189.792316ms) to execute
2020-07-23 20:29:31.379327 W | etcdserver: request "header:<ID:830759638434257068 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:502 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434257066 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (216.821507ms) to execute
2020-07-23 20:29:41.264013 W | etcdserver: request "header:<ID:830759638434257087 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:504 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434257085 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (131.561835ms) to execute
2020-07-23 20:29:51.258017 W | etcdserver: request "header:<ID:830759638434257116 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:506 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434257114 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (140.231421ms) to execute
2020-07-23 20:30:01.277140 W | etcdserver: request "header:<ID:830759638434257139 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:508 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434257137 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (157.274078ms) to execute
2020-07-23 20:30:01.277408 W | etcdserver: read-only range request "key:"/registry/clusterroles" range_end:"/registry/clusterrolet" count_only:true " with result "range_response_count:0 size:7" took too long (182.455082ms) to execute
2020-07-23 20:30:11.288572 W | etcdserver: request "header:<ID:830759638434257161 username:"kube-apiserver-etcd-client" auth_revision:1 > txn:<compare:<target:MOD key:"/registry/masterleases/192.168.1.60" mod_revision:510 > success:<request_put:<key:"/registry/masterleases/192.168.1.60" value_size:67 lease:830759638434257159 >> failure:<request_range:<key:"/registry/masterleases/192.168.1.60" > >>" with result "size:16" took too long (150.72765ms) to execute
2020-07-23 20:30:21.364080 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (231.935372ms) to execute
2020-07-23 20:30:31.349815 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:288" took too long (239.976943ms) to execute
2020-07-23 20:30:31.349874 W | etcdserver: read-only range request "key:"/registry/daemonsets" range_end:"/registry/daemonsett" count_only:true " with result "range_response_count:0 size:7" took too long (115.767492ms) to execute
==> kernel <==
16:12:24 up 15:03, 3 users, load average: 3.50, 3.07, 2.68
Linux kmaster-01 5.7.0-1-amd64 #1 SMP Debian 5.7.6-1 (2020-06-24) x86_64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux bullseye/sid"
==> kube-apiserver [3a1054cc3424] <==
Trace[338549137]: [530.220196ms] [506.012609ms] Object stored in database
I0723 20:23:11.563013 1 trace.go:116] Trace[1183838785]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-07-23 20:23:10.724862504 +0000 UTC m=+94.852725797) (total time: 838.094874ms):
Trace[1183838785]: [838.013973ms] [837.947572ms] Transaction committed
I0723 20:23:11.563115 1 trace.go:116] Trace[1934799711]: "Create" url:/api/v1/namespaces/kube-system/pods/storage-provisioner/binding,user-agent:kube-scheduler/v1.18.3 (linux/amd64) kubernetes/2e7996e/scheduler,client:192.168.1.60 (started: 2020-07-23 20:23:10.724384401 +0000 UTC m=+94.852247494) (total time: 838.705178ms):
Trace[1934799711]: [838.667177ms] [838.598177ms] Object stored in database
I0723 20:23:12.677784 1 trace.go:116] Trace[2060941112]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kube-scheduler/v1.18.3 (linux/amd64) kubernetes/2e7996e/scheduler,client:192.168.1.60 (started: 2020-07-23 20:23:11.564987193 +0000 UTC m=+95.692849986) (total time: 1.112756692s):
Trace[2060941112]: [1.112694691s] [1.11263149s] Object stored in database
I0723 20:23:12.680457 1 trace.go:116] Trace[1174277642]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-23 20:23:10.72164998 +0000 UTC m=+94.849512973) (total time: 1.958775925s):
Trace[1174277642]: [841.271897ms] [841.271897ms] initial value restored
Trace[1174277642]: [1.957164813s] [1.115892916s] Transaction prepared
I0723 20:23:13.271107 1 trace.go:116] Trace[2045275995]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-07-23 20:23:12.680234204 +0000 UTC m=+96.808098397) (total time: 590.83006ms):
Trace[2045275995]: [590.727159ms] [588.719644ms] Transaction committed
I0723 20:23:13.271417 1 trace.go:116] Trace[978133279]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-23 20:23:12.681162911 +0000 UTC m=+96.809025904) (total time: 590.205855ms):
Trace[978133279]: [590.134354ms] [590.124554ms] About to write a response
I0723 20:23:13.271468 1 trace.go:116] Trace[554012688]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner/status,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-23 20:23:12.680107003 +0000 UTC m=+96.807970196) (total time: 591.329963ms):
Trace[554012688]: [591.184362ms] [589.350348ms] Object stored in database
I0723 20:23:14.329629 1 trace.go:116] Trace[1407324416]: "List etcd3" key:/masterleases/,resourceVersion:0,limit:0,continue: (started: 2020-07-23 20:23:13.271811069 +0000 UTC m=+97.399674662) (total time: 1.057780861s):
Trace[1407324416]: [1.057780861s] [1.057780861s] END
I0723 20:23:14.330175 1 trace.go:116] Trace[626378985]: "Get" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-23 20:23:13.273714284 +0000 UTC m=+97.401577277) (total time: 1.05644575s):
Trace[626378985]: [1.056349549s] [1.056338549s] About to write a response
I0723 20:23:14.940937 1 trace.go:116] Trace[1742829764]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-23 20:23:14.132892613 +0000 UTC m=+98.260755606) (total time: 807.948631ms):
Trace[1742829764]: [807.82503ms] [807.73233ms] Object stored in database
I0723 20:23:14.941250 1 trace.go:116] Trace[35381833]: "Get" url:/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-23 20:23:14.330032633 +0000 UTC m=+98.457895926) (total time: 610.800711ms):
Trace[35381833]: [610.66591ms] [610.66111ms] About to write a response
I0723 20:23:14.941356 1 trace.go:116] Trace[1863605342]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-j7cd9,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-23 20:23:14.331768846 +0000 UTC m=+98.459631839) (total time: 609.485701ms):
Trace[1863605342]: [608.673895ms] [608.663695ms] About to write a response
I0723 20:23:14.941790 1 trace.go:116] Trace[166970800]: "List etcd3" key:/cronjobs,resourceVersion:,limit:500,continue: (started: 2020-07-23 20:23:14.330717538 +0000 UTC m=+98.458579931) (total time: 611.016612ms):
Trace[166970800]: [611.016612ms] [611.016612ms] END
I0723 20:23:14.944562 1 trace.go:116] Trace[883272284]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:cronjob-controller,client:192.168.1.60 (started: 2020-07-23 20:23:14.330673338 +0000 UTC m=+98.458539831) (total time: 613.790934ms):
Trace[883272284]: [613.188129ms] [613.167529ms] Listing from storage done
I0723 20:23:17.505023 1 trace.go:116] Trace[793152288]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-j7cd9,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-23 20:23:16.923095521 +0000 UTC m=+101.050958714) (total time: 581.879482ms):
Trace[793152288]: [581.879482ms] [581.869982ms] END
I0723 20:23:17.505065 1 trace.go:116] Trace[885751482]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-07-23 20:23:16.923114821 +0000 UTC m=+101.050978514) (total time: 581.905182ms):
Trace[885751482]: [581.905182ms] [581.874782ms] END
I0723 20:23:17.505221 1 trace.go:116] Trace[91190820]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:endpoint-controller,client:192.168.1.60 (started: 2020-07-23 20:23:16.92294752 +0000 UTC m=+101.050810813) (total time: 582.266384ms):
Trace[91190820]: [582.166184ms] [582.046483ms] Object stored in database
I0723 20:23:19.155764 1 trace.go:116] Trace[994766854]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-07-23 20:23:18.559931425 +0000 UTC m=+102.687795119) (total time: 595.788885ms):
Trace[994766854]: [595.669985ms] [594.853979ms] Transaction committed
I0723 20:23:19.155966 1 trace.go:116] Trace[913909783]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (started: 2020-07-23 20:23:18.560273628 +0000 UTC m=+102.688137721) (total time: 595.667485ms):
Trace[913909783]: [595.617385ms] [594.882279ms] Transaction committed
I0723 20:23:19.155972 1 trace.go:116] Trace[1535711266]: "GuaranteedUpdate etcd3" type:*apps.ReplicaSet (started: 2020-07-23 20:23:18.560227328 +0000 UTC m=+102.688090821) (total time: 595.700085ms):
Trace[1535711266]: [595.564984ms] [594.044873ms] Transaction committed
I0723 20:23:19.156002 1 trace.go:116] Trace[1581067331]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:endpoint-controller,client:192.168.1.60 (started: 2020-07-23 20:23:18.559770024 +0000 UTC m=+102.687633917) (total time: 596.199189ms):
Trace[1581067331]: [596.040388ms] [595.945487ms] Object stored in database
I0723 20:23:19.156065 1 trace.go:116] Trace[803624943]: "Update" url:/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/kube-dns-h6zw7,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:endpointslice-controller,client:192.168.1.60 (started: 2020-07-23 20:23:18.559966426 +0000 UTC m=+102.687829919) (total time: 596.070788ms):
Trace[803624943]: [596.017687ms] [595.778785ms] Object stored in database
I0723 20:23:19.156152 1 trace.go:116] Trace[2087737679]: "Update" url:/apis/apps/v1/namespaces/kube-system/replicasets/coredns-66bff467f8/status,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:replicaset-controller,client:192.168.1.60 (started: 2020-07-23 20:23:18.559984726 +0000 UTC m=+102.687847919) (total time: 596.132688ms):
Trace[2087737679]: [596.016988ms] [595.850487ms] Object stored in database
I0723 20:23:19.680927 1 trace.go:116] Trace[155439375]: "GuaranteedUpdate etcd3" type:*apps.Deployment (started: 2020-07-23 20:23:19.158885636 +0000 UTC m=+103.286749629) (total time: 521.999016ms):
Trace[155439375]: [521.869015ms] [520.388004ms] Transaction committed
I0723 20:23:19.681183 1 trace.go:116] Trace[1832089327]: "Update" url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:deployment-controller,client:192.168.1.60 (started: 2020-07-23 20:23:19.158671934 +0000 UTC m=+103.286534527) (total time: 522.48352ms):
Trace[1832089327]: [522.326219ms] [522.176618ms] Object stored in database
I0723 20:23:41.060437 1 trace.go:116] Trace[1722210216]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-23 20:23:40.525360298 +0000 UTC m=+124.653223691) (total time: 535.03598ms):
Trace[1722210216]: [535.003379ms] [532.846162ms] Transaction committed
I0723 20:24:18.141137 1 trace.go:116] Trace[476737735]: "List etcd3" key:/jobs,resourceVersion:,limit:500,continue: (started: 2020-07-23 20:24:15.077176328 +0000 UTC m=+159.205042921) (total time: 2.552825052s):
Trace[476737735]: [2.552825052s] [2.552825052s] END
I0723 20:24:18.141275 1 trace.go:116] Trace[346039653]: "List" url:/apis/batch/v1/jobs,user-agent:kube-controller-manager/v1.18.3 (linux/amd64) kubernetes/2e7996e/system:serviceaccount:kube-system:cronjob-controller,client:192.168.1.60 (started: 2020-07-23 20:24:15.077002527 +0000 UTC m=+159.204868020) (total time: 2.553152654s):
Trace[346039653]: [2.553077854s] [2.553027054s] Listing from storage done
I0723 20:24:18.142561 1 trace.go:116] Trace[30243856]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-23 20:24:16.266878174 +0000 UTC m=+160.394744267) (total time: 1.364569516s):
Trace[30243856]: [1.364514016s] [1.364323414s] Object stored in database
==> kube-apiserver [793df161b222] <==
Trace[790045985]: [589.071443ms] [587.291434ms] Transaction committed
I0725 16:09:35.276585 1 trace.go:116] Trace[1974460685]: "List etcd3" key:/pods/kube-system,resourceVersion:,limit:0,continue: (started: 2020-07-25 16:09:34.631803002 +0000 UTC m=+53908.734879388) (total time: 644.74623ms):
Trace[1974460685]: [644.74623ms] [644.74623ms] END
I0725 16:09:35.277305 1 trace.go:116] Trace[843745478]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:minikube/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.1.60 (started: 2020-07-25 16:09:34.631759202 +0000 UTC m=+53908.734835688) (total time: 645.518034ms):
Trace[843745478]: [644.854631ms] [644.820931ms] Listing from storage done
I0725 16:09:41.676800 1 trace.go:116] Trace[1171980673]: "List etcd3" key:/pods/kube-system,resourceVersion:,limit:0,continue: (started: 2020-07-25 16:09:41.132938079 +0000 UTC m=+53915.236014565) (total time: 543.824809ms):
Trace[1171980673]: [543.824809ms] [543.824809ms] END
I0725 16:09:41.677640 1 trace.go:116] Trace[1019200564]: "List" url:/api/v1/namespaces/kube-system/pods,user-agent:minikube/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.1.60 (started: 2020-07-25 16:09:41.132888179 +0000 UTC m=+53915.235965365) (total time: 544.649913ms):
Trace[1019200564]: [543.945709ms] [543.907109ms] Listing from storage done
I0725 16:09:51.710440 1 trace.go:116] Trace[1412758892]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-07-25 16:09:51.184923889 +0000 UTC m=+53925.288000675) (total time: 525.470313ms):
Trace[1412758892]: [525.421613ms] [519.938885ms] Transaction committed
I0725 16:09:51.710688 1 trace.go:116] Trace[1860338656]: "Patch" url:/api/v1/namespaces/default/events/liveness-exec.1624bd8a14d92603,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-25 16:09:51.184803788 +0000 UTC m=+53925.287881074) (total time: 525.845015ms):
Trace[1860338656]: [525.700915ms] [522.8187ms] Object stored in database
I0725 16:10:09.839743 1 trace.go:116] Trace[47435011]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:10:09.034818247 +0000 UTC m=+53943.137894833) (total time: 804.879655ms):
Trace[47435011]: [804.829455ms] [802.275442ms] Transaction committed
I0725 16:10:19.718815 1 trace.go:116] Trace[1188135304]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-25 16:10:19.031315848 +0000 UTC m=+53953.134392834) (total time: 687.457048ms):
Trace[1188135304]: [687.400848ms] [687.386048ms] About to write a response
I0725 16:10:20.333027 1 trace.go:116] Trace[1843444984]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:10:19.721263709 +0000 UTC m=+53953.824340195) (total time: 611.731358ms):
Trace[1843444984]: [611.703558ms] [609.923149ms] Transaction committed
I0725 16:10:29.576491 1 trace.go:116] Trace[1072215909]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:10:29.03529538 +0000 UTC m=+53963.138372066) (total time: 541.151993ms):
Trace[1072215909]: [541.122193ms] [539.241583ms] Transaction committed
I0725 16:10:34.569630 1 trace.go:116] Trace[72785106]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-25 16:10:33.946284224 +0000 UTC m=+53968.049362810) (total time: 623.306817ms):
Trace[72785106]: [623.247616ms] [623.234316ms] About to write a response
I0725 16:10:37.412622 1 trace.go:116] Trace[735837575]: "GuaranteedUpdate etcd3" type:*coordination.Lease (started: 2020-07-25 16:10:36.870005712 +0000 UTC m=+53970.973082298) (total time: 542.575399ms):
Trace[735837575]: [542.551999ms] [541.965296ms] Transaction committed
I0725 16:10:37.412758 1 trace.go:116] Trace[566422580]: "Update" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kmaster-01,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-25 16:10:36.869852811 +0000 UTC m=+53970.972929197) (total time: 542.874001ms):
Trace[566422580]: [542.818601ms] [542.718401ms] Object stored in database
I0725 16:10:39.830607 1 trace.go:116] Trace[271180763]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:10:39.03629799 +0000 UTC m=+53973.139376276) (total time: 794.234598ms):
Trace[271180763]: [794.179098ms] [792.348488ms] Transaction committed
I0725 16:10:49.800398 1 trace.go:116] Trace[864639360]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:10:49.092222577 +0000 UTC m=+53983.195300063) (total time: 708.129953ms):
Trace[864639360]: [708.096453ms] [704.419734ms] Transaction committed
I0725 16:11:00.149766 1 trace.go:116] Trace[1339950083]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-25 16:10:59.524649599 +0000 UTC m=+53993.627726085) (total time: 625.072924ms):
Trace[1339950083]: [625.015024ms] [625.003024ms] About to write a response
I0725 16:11:09.709593 1 trace.go:116] Trace[1651938364]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:11:09.036289565 +0000 UTC m=+54003.139366151) (total time: 673.259572ms):
Trace[1651938364]: [673.229772ms] [670.356658ms] Transaction committed
I0725 16:11:39.967069 1 trace.go:116] Trace[1687246926]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:11:39.037997594 +0000 UTC m=+54033.141074580) (total time: 928.882289ms):
Trace[1687246926]: [928.707288ms] [926.803679ms] Transaction committed
I0725 16:11:50.277095 1 trace.go:116] Trace[1520584993]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-25 16:11:49.034555238 +0000 UTC m=+54043.137631524) (total time: 1.242494706s):
Trace[1520584993]: [1.242431906s] [1.242420006s] About to write a response
I0725 16:11:51.134422 1 trace.go:116] Trace[465474951]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:11:50.279815359 +0000 UTC m=+54044.382891945) (total time: 854.564406ms):
Trace[465474951]: [854.534506ms] [852.634296ms] Transaction committed
I0725 16:11:59.953075 1 trace.go:116] Trace[1858445752]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-25 16:11:59.411869641 +0000 UTC m=+54053.514946027) (total time: 541.16059ms):
Trace[1858445752]: [541.100689ms] [541.089489ms] About to write a response
I0725 16:12:20.040919 1 trace.go:116] Trace[1394175498]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:12:19.285409588 +0000 UTC m=+54073.388486174) (total time: 755.468494ms):
Trace[1394175498]: [755.438494ms] [753.558284ms] Transaction committed
I0725 16:12:34.335485 1 trace.go:116] Trace[1745086066]: "Get" url:/api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx,user-agent:nginx-ingress-controller/v0.34.1 (linux/amd64) ingress-nginx/v20200715-ingress-nginx-2.11.0-8-gda5fa45e2,client:172.17.0.11 (started: 2020-07-25 16:12:33.816985987 +0000 UTC m=+54087.920063373) (total time: 518.456772ms):
Trace[1745086066]: [518.371871ms] [518.357571ms] About to write a response
I0725 16:12:34.960062 1 trace.go:116] Trace[1480872497]: "GuaranteedUpdate etcd3" type:*core.ConfigMap (started: 2020-07-25 16:12:34.336926467 +0000 UTC m=+54088.440002253) (total time: 623.088111ms):
Trace[1480872497]: [623.037711ms] [622.605809ms] Transaction committed
I0725 16:12:34.960210 1 trace.go:116] Trace[825892482]: "GuaranteedUpdate etcd3" type:*core.Event (started: 2020-07-25 16:12:34.24060577 +0000 UTC m=+54088.343682356) (total time: 719.560009ms):
Trace[825892482]: [719.315407ms] [622.062106ms] Transaction committed
I0725 16:12:34.960244 1 trace.go:116] Trace[499852758]: "Update" url:/api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx,user-agent:nginx-ingress-controller/v0.34.1 (linux/amd64) ingress-nginx/v20200715-ingress-nginx-2.11.0-8-gda5fa45e2,client:172.17.0.11 (started: 2020-07-25 16:12:34.336732866 +0000 UTC m=+54088.439810152) (total time: 623.471213ms):
Trace[499852758]: [623.377512ms] [623.230212ms] Object stored in database
I0725 16:12:34.960347 1 trace.go:116] Trace[66879150]: "Patch" url:/api/v1/namespaces/kube-system/events/etcd-kmaster-01.16250a50982d91c1,user-agent:kubelet/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:192.168.1.60 (started: 2020-07-25 16:12:34.24050737 +0000 UTC m=+54088.343583756) (total time: 719.808609ms):
Trace[66879150]: [95.293491ms] [95.247091ms] About to apply patch
Trace[66879150]: [719.728709ms] [624.006216ms] Object stored in database
I0725 16:12:35.468747 1 trace.go:116] Trace[217285850]: "Get" url:/api/v1/namespaces/kube-system,user-agent:kube-apiserver/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:::1 (started: 2020-07-25 16:12:34.580611322 +0000 UTC m=+54088.683687808) (total time: 888.093477ms):
Trace[217285850]: [888.041577ms] [888.028277ms] About to write a response
I0725 16:12:39.921879 1 trace.go:116] Trace[1605543888]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (started: 2020-07-25 16:12:39.039847804 +0000 UTC m=+54093.142924490) (total time: 881.954045ms):
Trace[1605543888]: [881.729644ms] [879.548733ms] Transaction committed
==> kube-controller-manager [aead8f81b956] <==
I0724 10:32:24.574295 1 shared_informer.go:230] Caches are synced for garbage collector
I0724 10:32:24.574334 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0724 10:32:24.579610 1 shared_informer.go:230] Caches are synced for garbage collector
I0724 10:32:24.625394 1 shared_informer.go:230] Caches are synced for resource quota
I0724 10:34:13.579672 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"0aa71dce-5aa0-4247-b91f-3f89fa2b1250", APIVersion:"apps/v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-dc6947fbf to 1
I0724 10:34:14.174259 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"641", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:14.673110 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:14.673568 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"8d79ccd5-ac8a-40e8-a9bf-9ea24c6f4dd4", APIVersion:"apps/v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-6dbb54fd95 to 1
I0724 10:34:15.139149 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6dbb54fd95", UID:"c2546a7b-0c72-4b9f-8db1-f7a455aa12c6", APIVersion:"apps/v1", ResourceVersion:"648", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:15.140720 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:15.140832 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:15.522974 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95" failed with pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:15.525060 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:15.525132 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:15.981868 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95" failed with pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:15.982155 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:15.982056 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6dbb54fd95", UID:"c2546a7b-0c72-4b9f-8db1-f7a455aa12c6", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:15.982315 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:16.054944 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95" failed with pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:16.055219 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6dbb54fd95", UID:"c2546a7b-0c72-4b9f-8db1-f7a455aa12c6", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:16.055874 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:16.055876 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:16.412710 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:16.412712 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:16.413868 1 replica_set.go:535] sync "kubernetes-dashboard/kubernetes-dashboard-6dbb54fd95" failed with pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:16.413878 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6dbb54fd95", UID:"c2546a7b-0c72-4b9f-8db1-f7a455aa12c6", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6dbb54fd95-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0724 10:34:16.479691 1 replica_set.go:535] sync "kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf" failed with pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:16.479715 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-dc6947fbf-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0724 10:34:17.780859 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-dc6947fbf", UID:"e8cc818f-48db-4e7f-984f-c29c243aeb8a", APIVersion:"apps/v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-dc6947fbf-9nvf8
I0724 10:34:17.780953 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6dbb54fd95", UID:"c2546a7b-0c72-4b9f-8db1-f7a455aa12c6", APIVersion:"apps/v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6dbb54fd95-f4pq9
I0724 11:32:22.915959 1 cleaner.go:167] Cleaning CSR "csr-vcccx" as it is more than 1h0m0s old and approved.
I0724 15:42:35.338995 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"lfs158", Name:"nginx", UID:"4f11f167-814a-4313-8f0c-f864e4b7f48d", APIVersion:"apps/v1", ResourceVersion:"4556", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-745b4df97d to 1
I0724 15:42:35.663803 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"lfs158", Name:"nginx-745b4df97d", UID:"f276b4fa-8e7b-46f2-b7d3-cb8a12123aea", APIVersion:"apps/v1", ResourceVersion:"4557", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-745b4df97d-p2nr9
I0724 16:10:23.841524 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"webserver", UID:"37904e51-f879-4989-bf73-c19cb7c31f99", APIVersion:"apps/v1", ResourceVersion:"4923", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-5d58b6b749 to 3
I0724 16:10:24.258983 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"webserver-5d58b6b749", UID:"7f418b52-83d9-4198-a9a2-495b5b4be5f9", APIVersion:"apps/v1", ResourceVersion:"4924", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-5d58b6b749-zhdxp
I0724 16:10:24.605456 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"webserver-5d58b6b749", UID:"7f418b52-83d9-4198-a9a2-495b5b4be5f9", APIVersion:"apps/v1", ResourceVersion:"4924", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-5d58b6b749-hsl9j
I0724 16:10:24.605906 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"webserver-5d58b6b749", UID:"7f418b52-83d9-4198-a9a2-495b5b4be5f9", APIVersion:"apps/v1", ResourceVersion:"4924", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-5d58b6b749-7hvz7
I0724 16:23:22.448929 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"webserver", UID:"70ae6836-95f3-49e9-92f1-69dd68af7279", APIVersion:"apps/v1", ResourceVersion:"5142", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set webserver-97499b967 to 3
I0724 16:23:22.782203 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"webserver-97499b967", UID:"d271cccc-4732-48bd-b5dc-db2e18582d00", APIVersion:"apps/v1", ResourceVersion:"5143", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-97499b967-hr7wk
I0724 16:23:23.108348 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"webserver-97499b967", UID:"d271cccc-4732-48bd-b5dc-db2e18582d00", APIVersion:"apps/v1", ResourceVersion:"5143", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-97499b967-fhfqz
I0724 16:23:23.108395 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"webserver-97499b967", UID:"d271cccc-4732-48bd-b5dc-db2e18582d00", APIVersion:"apps/v1", ResourceVersion:"5143", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: webserver-97499b967-mt6kj
I0725 08:11:29.990527 1 cleaner.go:167] Cleaning CSR "student-csr" as it is more than 1h0m0s old and approved.
I0725 15:32:50.348922 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"ingress-nginx-controller", UID:"fb75eaba-2b12-493b-bb03-4e8a88dc8d55", APIVersion:"apps/v1", ResourceVersion:"11881", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-69ccf5d9d8 to 1
I0725 15:32:50.898712 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"ingress-nginx-controller-69ccf5d9d8", UID:"21e57e76-8f1c-4b5d-b0c6-468a1d2020b9", APIVersion:"apps/v1", ResourceVersion:"11882", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-69ccf5d9d8-7qvfn
I0725 15:32:53.333329 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"8c9fa3cb-ad71-4c1b-b71a-ad2cfd49c25b", APIVersion:"batch/v1", ResourceVersion:"11899", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-4bnt4
I0725 15:32:55.574585 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"70ffe6a7-7c60-42fc-b6b4-db679324f954", APIVersion:"batch/v1", ResourceVersion:"11906", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-9cmsj
I0725 15:33:36.308140 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-create", UID:"8c9fa3cb-ad71-4c1b-b71a-ad2cfd49c25b", APIVersion:"batch/v1", ResourceVersion:"11910", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
I0725 15:33:43.845781 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"ingress-nginx-admission-patch", UID:"70ffe6a7-7c60-42fc-b6b4-db679324f954", APIVersion:"batch/v1", ResourceVersion:"11914", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
I0725 16:01:57.468094 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"30c2736b-93c9-486f-8c2d-ccd1af6022ff", APIVersion:"apps/v1", ResourceVersion:"12370", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-c96557986 to 1
I0725 16:01:58.433293 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-c96557986", UID:"91500876-d8f7-457f-b96b-9fad68c3a405", APIVersion:"apps/v1", ResourceVersion:"12371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-c96557986-grglz
E0725 16:02:02.035461 1 job_controller.go:793] pods "ingress-nginx-admission-create-" is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount "ingress-nginx-admission" not found
E0725 16:02:02.035528 1 job_controller.go:398] Error syncing job: pods "ingress-nginx-admission-create-" is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount "ingress-nginx-admission" not found
I0725 16:02:02.035703 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fd9cdda2-6f5d-4a7b-93bd-b72f44b06846", APIVersion:"batch/v1", ResourceVersion:"12388", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "ingress-nginx-admission-create-" is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount "ingress-nginx-admission" not found
E0725 16:02:02.960810 1 job_controller.go:793] pods "ingress-nginx-admission-patch-" is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount "ingress-nginx-admission" not found
E0725 16:02:02.960878 1 job_controller.go:398] Error syncing job: pods "ingress-nginx-admission-patch-" is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount "ingress-nginx-admission" not found
I0725 16:02:02.961010 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b3b4f673-fa53-425d-913c-c078ffc55330", APIVersion:"batch/v1", ResourceVersion:"12391", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "ingress-nginx-admission-patch-" is forbidden: error looking up service account ingress-nginx/ingress-nginx-admission: serviceaccount "ingress-nginx-admission" not found
I0725 16:02:12.043015 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fd9cdda2-6f5d-4a7b-93bd-b72f44b06846", APIVersion:"batch/v1", ResourceVersion:"12388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-ltm76
I0725 16:02:13.378310 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b3b4f673-fa53-425d-913c-c078ffc55330", APIVersion:"batch/v1", ResourceVersion:"12391", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-pnqr4
I0725 16:02:49.649220 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"fd9cdda2-6f5d-4a7b-93bd-b72f44b06846", APIVersion:"batch/v1", ResourceVersion:"12407", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
I0725 16:03:02.982779 1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b3b4f673-fa53-425d-913c-c078ffc55330", APIVersion:"batch/v1", ResourceVersion:"12414", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
==> kube-controller-manager [d165bc7d43f8] <==
W0723 20:22:39.060271 1 controllermanager.go:525] Skipping "nodeipam"
I0723 20:22:39.060279 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving
I0723 20:22:39.383016 1 controllermanager.go:533] Started "replicationcontroller"
I0723 20:22:39.383135 1 replica_set.go:181] Starting replicationcontroller controller
I0723 20:22:39.383169 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
I0723 20:22:40.207674 1 controllermanager.go:533] Started "deployment"
I0723 20:22:40.207715 1 deployment_controller.go:153] Starting deployment controller
I0723 20:22:40.207735 1 shared_informer.go:223] Waiting for caches to sync for deployment
I0723 20:22:41.158524 1 controllermanager.go:533] Started "cronjob"
I0723 20:22:41.158678 1 cronjob_controller.go:97] Starting CronJob Manager
I0723 20:22:41.941965 1 controllermanager.go:533] Started "attachdetach"
I0723 20:22:41.941998 1 attach_detach_controller.go:338] Starting attach detach controller
I0723 20:22:41.942019 1 shared_informer.go:223] Waiting for caches to sync for attach detach
I0723 20:22:41.942271 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0723 20:22:41.946569 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0723 20:22:41.958328 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0723 20:22:41.989864 1 shared_informer.go:230] Caches are synced for HPA
I0723 20:22:42.005654 1 shared_informer.go:230] Caches are synced for job
I0723 20:22:42.006393 1 shared_informer.go:230] Caches are synced for PVC protection
I0723 20:22:42.008395 1 shared_informer.go:230] Caches are synced for deployment
I0723 20:22:42.014473 1 shared_informer.go:230] Caches are synced for endpoint
I0723 20:22:42.030822 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0723 20:22:42.033658 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0723 20:22:42.049147 1 shared_informer.go:230] Caches are synced for PV protection
I0723 20:22:42.049457 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0723 20:22:42.154353 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0723 20:22:42.160535 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
W0723 20:22:42.194196 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kmaster-01" does not exist
I0723 20:22:42.214281 1 shared_informer.go:230] Caches are synced for daemon sets
I0723 20:22:42.221779 1 shared_informer.go:230] Caches are synced for GC
I0723 20:22:42.242374 1 shared_informer.go:230] Caches are synced for attach detach
I0723 20:22:42.256790 1 shared_informer.go:230] Caches are synced for taint
I0723 20:22:42.256873 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
I0723 20:22:42.256893 1 taint_manager.go:187] Starting NoExecuteTaintManager
W0723 20:22:42.256945 1 node_lifecycle_controller.go:1048] Missing timestamp for Node kmaster-01. Assuming now as a timestamp.
I0723 20:22:42.257030 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0723 20:22:42.257063 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kmaster-01", UID:"4349d478-29d6-43ef-83b7-c66689fea9e4", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kmaster-01 event: Registered Node kmaster-01 in Controller
I0723 20:22:42.274238 1 shared_informer.go:230] Caches are synced for TTL
I0723 20:22:42.328648 1 shared_informer.go:230] Caches are synced for persistent volume
I0723 20:22:42.329363 1 shared_informer.go:230] Caches are synced for expand
I0723 20:22:42.425821 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"97a5bc74-1e77-47f3-9e60-2845f0c716ba", APIVersion:"apps/v1", ResourceVersion:"225", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0723 20:22:42.483624 1 shared_informer.go:230] Caches are synced for ReplicationController
I0723 20:22:42.512725 1 shared_informer.go:230] Caches are synced for namespace
I0723 20:22:42.532354 1 shared_informer.go:230] Caches are synced for service account
I0723 20:22:42.545688 1 shared_informer.go:230] Caches are synced for disruption
I0723 20:22:42.545729 1 disruption.go:339] Sending events to api server.
I0723 20:22:42.547091 1 shared_informer.go:230] Caches are synced for garbage collector
I0723 20:22:42.592150 1 shared_informer.go:230] Caches are synced for resource quota
I0723 20:22:42.630764 1 shared_informer.go:230] Caches are synced for stateful set
I0723 20:22:42.640529 1 shared_informer.go:230] Caches are synced for garbage collector
I0723 20:22:42.640559 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0723 20:22:42.642579 1 shared_informer.go:230] Caches are synced for resource quota
E0723 20:22:42.894142 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0723 20:22:42.894922 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"998d4cea-4d22-47a1-9cdd-9d474180efd6", APIVersion:"apps/v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-j7cd9
I0723 20:22:42.895219 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"c56c55c9-5257-4aa7-a1f0-22630b5b10d4", APIVersion:"apps/v1", ResourceVersion:"236", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-767vp
I0723 20:22:43.512209 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"998d4cea-4d22-47a1-9cdd-9d474180efd6", APIVersion:"apps/v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-4wwxt
E0723 20:22:45.281345 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0723 20:22:46.190791 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"c56c55c9-5257-4aa7-a1f0-22630b5b10d4", ResourceVersion:"236", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731132538, loc:(*time.Location)(0x6d09200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000f45fc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000f45fe0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00010c500), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000d24880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00010c520), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00010c580), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00010c5e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ce93b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b58b68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001dc700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00025e698)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000b58bb8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0723 20:23:02.436075 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"97a5bc74-1e77-47f3-9e60-2845f0c716ba", APIVersion:"apps/v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
I0723 20:23:04.658021 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"998d4cea-4d22-47a1-9cdd-9d474180efd6", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-j7cd9
==> kube-proxy [73f3f8e3ca78] <==
W0723 20:23:04.153519 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0723 20:23:05.557695 1 node.go:136] Successfully retrieved node IP: 192.168.1.60
I0723 20:23:05.557749 1 server_others.go:186] Using iptables Proxier.
W0723 20:23:05.557766 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0723 20:23:05.557779 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0723 20:23:05.559723 1 server.go:583] Version: v1.18.3
I0723 20:23:05.560625 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0723 20:23:05.562021 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0723 20:23:05.562540 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0723 20:23:05.563300 1 config.go:315] Starting service config controller
I0723 20:23:05.563603 1 shared_informer.go:223] Waiting for caches to sync for service config
I0723 20:23:05.565540 1 config.go:133] Starting endpoints config controller
I0723 20:23:05.567247 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0723 20:23:05.664344 1 shared_informer.go:230] Caches are synced for service config
I0723 20:23:05.667734 1 shared_informer.go:230] Caches are synced for endpoints config
==> kube-proxy [e5e9e87eee38] <==
W0724 10:33:12.586586 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0724 10:33:12.965958 1 node.go:136] Successfully retrieved node IP: 192.168.1.60
I0724 10:33:12.966010 1 server_others.go:186] Using iptables Proxier.
W0724 10:33:12.966023 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I0724 10:33:12.966031 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I0724 10:33:12.981640 1 server.go:583] Version: v1.18.3
I0724 10:33:13.019127 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0724 10:33:13.019264 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0724 10:33:13.019365 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0724 10:33:13.048897 1 config.go:315] Starting service config controller
I0724 10:33:13.048978 1 shared_informer.go:223] Waiting for caches to sync for service config
I0724 10:33:13.048979 1 config.go:133] Starting endpoints config controller
I0724 10:33:13.049011 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0724 10:33:13.149319 1 shared_informer.go:230] Caches are synced for service config
I0724 10:33:13.149348 1 shared_informer.go:230] Caches are synced for endpoints config

==> kube-scheduler [b43e1c0c3d5f] <==
I0723 20:21:37.398974 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0723 20:21:38.245854 1 serving.go:313] Generated self-signed cert in-memory
W0723 20:21:41.750382 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0723 20:21:41.750757 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0723 20:21:41.750762 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0723 20:21:41.750798 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0723 20:21:41.789216 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0723 20:21:41.789270 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0723 20:21:41.792969 1 authorization.go:47] Authorization is disabled
W0723 20:21:41.793012 1 authentication.go:40] Authentication is disabled
I0723 20:21:41.793032 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0723 20:21:41.796787 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0723 20:21:41.796838 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0723 20:21:41.799085 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0723 20:21:41.799296 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0723 20:21:41.802928 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0723 20:21:41.803787 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:41.804172 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0723 20:21:41.805439 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0723 20:21:41.806080 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0723 20:21:41.805615 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:41.805885 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0723 20:21:41.805917 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0723 20:21:41.806362 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0723 20:21:42.618075 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:42.677628 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0723 20:21:42.757707 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0723 20:21:42.785052 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0723 20:21:43.041454 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0723 20:21:43.078842 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0723 20:21:43.106609 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0723 20:21:43.297532 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0723 20:21:43.337604 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:44.943321 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0723 20:21:45.058074 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0723 20:21:45.131755 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0723 20:21:45.362895 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:45.385073 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0723 20:21:45.449835 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0723 20:21:45.467655 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:45.795676 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0723 20:21:46.213137 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0723 20:21:48.521801 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0723 20:21:49.299364 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0723 20:21:49.371107 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0723 20:21:49.790122 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:50.706387 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0723 20:21:50.909848 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:51.535902 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0723 20:21:51.668220 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0723 20:21:52.152555 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0723 20:21:55.849602 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0723 20:21:57.314466 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0723 20:21:57.540332 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0723 20:21:58.474767 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0723 20:21:59.719819 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0723 20:22:01.038404 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0723 20:22:02.127219 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 20:22:03.396606 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0723 20:22:18.901233 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kube-scheduler [e86ee2dbbb76] <==
I0724 10:31:58.059027 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0724 10:31:58.059118 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0724 10:31:58.576851 1 serving.go:313] Generated self-signed cert in-memory
W0724 10:31:59.749465 1 authentication.go:297] Error looking up in-cluster authentication configuration: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: dial tcp 192.168.1.60:8443: connect: connection refused
W0724 10:31:59.749499 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0724 10:31:59.749509 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0724 10:31:59.850573 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0724 10:31:59.850606 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0724 10:31:59.902834 1 authorization.go:47] Authorization is disabled
W0724 10:31:59.902867 1 authentication.go:40] Authentication is disabled
I0724 10:31:59.902882 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0724 10:31:59.924112 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0724 10:31:59.924144 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0724 10:31:59.958535 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0724 10:31:59.958651 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0724 10:31:59.985151 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985177 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985173 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985199 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985382 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985429 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985151 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985671 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:31:59.985812 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:00.794031 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: Get https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:00.852838 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:00.985082 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: Get https://control-plane.minikube.internal:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:01.071259 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:01.117463 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:01.446172 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: Get https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:01.492952 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: Get https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:01.517472 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: Get https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:01.570149 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: Get https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: dial tcp 192.168.1.60:8443: connect: connection refused
E0724 10:32:08.464549 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0724 10:32:08.464830 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0724 10:32:08.464845 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0724 10:32:08.464993 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0724 10:32:08.465012 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0724 10:32:08.465167 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0724 10:32:08.465200 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0724 10:32:08.465276 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0724 10:32:08.470993 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0724 10:32:14.424570 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
-- Logs begin at Wed 2020-07-15 13:23:20 UTC, end at Sat 2020-07-25 16:14:58 UTC. --
Jul 25 16:13:28 kmaster-01 kubelet[1847]: E0725 16:13:28.265100 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_327 (07f55f3c7f216168f713dc8548d99ee3f045ea50b7645269d4cf92eb82254706): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:28 kmaster-01 kubelet[1847]: E0725 16:13:28.265127 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_327 (07f55f3c7f216168f713dc8548d99ee3f045ea50b7645269d4cf92eb82254706): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:28 kmaster-01 kubelet[1847]: E0725 16:13:28.265239 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_327 (07f55f3c7f216168f713dc8548d99ee3f045ea50b7645269d4cf92eb82254706): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:13:32 kmaster-01 kubelet[1847]: E0725 16:13:32.791451 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "8d5352d90b0626a9df42a02991910a751bb7f9ab874bdbfb7c117072b089a3b2" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: 8d5352d90b0626a9df42a02991910a751bb7f9ab874bdbfb7c117072b089a3b2
Jul 25 16:13:34 kmaster-01 kubelet[1847]: W0725 16:13:34.124861 1847 pod_container_deletor.go:77] Container "8d5352d90b0626a9df42a02991910a751bb7f9ab874bdbfb7c117072b089a3b2" not found in pod's containers
Jul 25 16:13:35 kmaster-01 kubelet[1847]: I0725 16:13:35.879990 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:13:35 kmaster-01 kubelet[1847]: E0725 16:13:35.880460 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:13:43 kmaster-01 kubelet[1847]: E0725 16:13:43.972833 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_328 (e3df3d6a4e463c847b72b3cfa128a0e952ae231e9e359430bfc45e0bc02883ff): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:43 kmaster-01 kubelet[1847]: E0725 16:13:43.972896 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_328 (e3df3d6a4e463c847b72b3cfa128a0e952ae231e9e359430bfc45e0bc02883ff): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:43 kmaster-01 kubelet[1847]: E0725 16:13:43.972923 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_328 (e3df3d6a4e463c847b72b3cfa128a0e952ae231e9e359430bfc45e0bc02883ff): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:43 kmaster-01 kubelet[1847]: E0725 16:13:43.973018 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_328 (e3df3d6a4e463c847b72b3cfa128a0e952ae231e9e359430bfc45e0bc02883ff): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:13:48 kmaster-01 kubelet[1847]: I0725 16:13:47.879873 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:13:48 kmaster-01 kubelet[1847]: E0725 16:13:47.880192 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:13:48 kmaster-01 kubelet[1847]: E0725 16:13:48.363244 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "b86c36dd68ecf4489ea488b1b28195d92deacfb2c8436bd16309ae3472095909" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: b86c36dd68ecf4489ea488b1b28195d92deacfb2c8436bd16309ae3472095909
Jul 25 16:13:50 kmaster-01 kubelet[1847]: W0725 16:13:50.548779 1847 pod_container_deletor.go:77] Container "b86c36dd68ecf4489ea488b1b28195d92deacfb2c8436bd16309ae3472095909" not found in pod's containers
Jul 25 16:13:59 kmaster-01 kubelet[1847]: E0725 16:13:59.173601 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_329 (e379b17786b2abe5196d756db75e1199be57858e0de5b9ccb535e4f32f16eee0): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:59 kmaster-01 kubelet[1847]: E0725 16:13:59.173670 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_329 (e379b17786b2abe5196d756db75e1199be57858e0de5b9ccb535e4f32f16eee0): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:59 kmaster-01 kubelet[1847]: E0725 16:13:59.173695 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_329 (e379b17786b2abe5196d756db75e1199be57858e0de5b9ccb535e4f32f16eee0): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:13:59 kmaster-01 kubelet[1847]: E0725 16:13:59.173772 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_329 (e379b17786b2abe5196d756db75e1199be57858e0de5b9ccb535e4f32f16eee0): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:14:02 kmaster-01 kubelet[1847]: I0725 16:14:02.879702 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:14:02 kmaster-01 kubelet[1847]: E0725 16:14:02.880022 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:14:03 kmaster-01 kubelet[1847]: E0725 16:14:03.787226 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "d8a380324e411dcc857606e9f31366a7bd2c94cce856cf21411ddaee7a8fa636" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: d8a380324e411dcc857606e9f31366a7bd2c94cce856cf21411ddaee7a8fa636
Jul 25 16:14:05 kmaster-01 kubelet[1847]: W0725 16:14:05.389767 1847 pod_container_deletor.go:77] Container "d8a380324e411dcc857606e9f31366a7bd2c94cce856cf21411ddaee7a8fa636" not found in pod's containers
Jul 25 16:14:12 kmaster-01 kubelet[1847]: E0725 16:14:12.396191 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_330 (8db08b9760e34b2fe05ec9226598e23fabe7676fc78a8480c6aac2dfddbc2585): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:12 kmaster-01 kubelet[1847]: E0725 16:14:12.396253 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_330 (8db08b9760e34b2fe05ec9226598e23fabe7676fc78a8480c6aac2dfddbc2585): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:12 kmaster-01 kubelet[1847]: E0725 16:14:12.396279 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_330 (8db08b9760e34b2fe05ec9226598e23fabe7676fc78a8480c6aac2dfddbc2585): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:12 kmaster-01 kubelet[1847]: E0725 16:14:12.396371 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_330 (8db08b9760e34b2fe05ec9226598e23fabe7676fc78a8480c6aac2dfddbc2585): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:14:12 kmaster-01 kubelet[1847]: E0725 16:14:12.397852 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "b86c36dd68ecf4489ea488b1b28195d92deacfb2c8436bd16309ae3472095909" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: b86c36dd68ecf4489ea488b1b28195d92deacfb2c8436bd16309ae3472095909
Jul 25 16:14:13 kmaster-01 kubelet[1847]: I0725 16:14:13.880291 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:14:13 kmaster-01 kubelet[1847]: E0725 16:14:13.880740 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:14:16 kmaster-01 kubelet[1847]: E0725 16:14:16.487431 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "33ad1cb62f24b37fa057a88527c4e4cd7fb8692aad93d01ca10161d3ebf634e0" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: 33ad1cb62f24b37fa057a88527c4e4cd7fb8692aad93d01ca10161d3ebf634e0
Jul 25 16:14:22 kmaster-01 kubelet[1847]: E0725 16:14:22.844722 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_331 (d1a4b42ba34397f27e42e3f0a0aff362ce2c449ebf83de5889c23dbb9aec00a5): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:22 kmaster-01 kubelet[1847]: E0725 16:14:22.844790 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_331 (d1a4b42ba34397f27e42e3f0a0aff362ce2c449ebf83de5889c23dbb9aec00a5): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:22 kmaster-01 kubelet[1847]: E0725 16:14:22.844817 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_331 (d1a4b42ba34397f27e42e3f0a0aff362ce2c449ebf83de5889c23dbb9aec00a5): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:22 kmaster-01 kubelet[1847]: E0725 16:14:22.844926 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_331 (d1a4b42ba34397f27e42e3f0a0aff362ce2c449ebf83de5889c23dbb9aec00a5): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:14:22 kmaster-01 kubelet[1847]: W0725 16:14:22.848124 1847 pod_container_deletor.go:77] Container "33ad1cb62f24b37fa057a88527c4e4cd7fb8692aad93d01ca10161d3ebf634e0" not found in pod's containers
Jul 25 16:14:24 kmaster-01 kubelet[1847]: I0725 16:14:24.879671 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:14:24 kmaster-01 kubelet[1847]: E0725 16:14:24.880024 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:14:26 kmaster-01 kubelet[1847]: E0725 16:14:26.918212 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "21e78398896143b738b19cd3697a77630a326a0b60e5285a457591010447b08e" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: 21e78398896143b738b19cd3697a77630a326a0b60e5285a457591010447b08e
Jul 25 16:14:28 kmaster-01 kubelet[1847]: W0725 16:14:28.212693 1847 pod_container_deletor.go:77] Container "21e78398896143b738b19cd3697a77630a326a0b60e5285a457591010447b08e" not found in pod's containers
Jul 25 16:14:33 kmaster-01 kubelet[1847]: E0725 16:14:33.718093 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_332 (14e252cf581c8607f14ece6ba49722ed4ebb9b2dcd26e6ee8947d8075906224a): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:33 kmaster-01 kubelet[1847]: E0725 16:14:33.718231 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_332 (14e252cf581c8607f14ece6ba49722ed4ebb9b2dcd26e6ee8947d8075906224a): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:33 kmaster-01 kubelet[1847]: E0725 16:14:33.718260 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_332 (14e252cf581c8607f14ece6ba49722ed4ebb9b2dcd26e6ee8947d8075906224a): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:33 kmaster-01 kubelet[1847]: E0725 16:14:33.718349 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_332 (14e252cf581c8607f14ece6ba49722ed4ebb9b2dcd26e6ee8947d8075906224a): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:14:36 kmaster-01 kubelet[1847]: I0725 16:14:36.879814 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:14:36 kmaster-01 kubelet[1847]: E0725 16:14:36.880151 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:14:37 kmaster-01 kubelet[1847]: E0725 16:14:37.372254 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "3ac5ff99c6301fa4dc0dd16f8e7e53b988328e5e802aff20aaece83584708458" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: 3ac5ff99c6301fa4dc0dd16f8e7e53b988328e5e802aff20aaece83584708458
Jul 25 16:14:38 kmaster-01 kubelet[1847]: W0725 16:14:38.958423 1847 pod_container_deletor.go:77] Container "3ac5ff99c6301fa4dc0dd16f8e7e53b988328e5e802aff20aaece83584708458" not found in pod's containers
Jul 25 16:14:44 kmaster-01 kubelet[1847]: E0725 16:14:44.508159 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_333 (4f8a5fae5129da438488509b723e1d00764b24101ed4904a760bb09952c89a95): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:44 kmaster-01 kubelet[1847]: E0725 16:14:44.508224 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_333 (4f8a5fae5129da438488509b723e1d00764b24101ed4904a760bb09952c89a95): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:44 kmaster-01 kubelet[1847]: E0725 16:14:44.508247 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_333 (4f8a5fae5129da438488509b723e1d00764b24101ed4904a760bb09952c89a95): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:44 kmaster-01 kubelet[1847]: E0725 16:14:44.508331 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_333 (4f8a5fae5129da438488509b723e1d00764b24101ed4904a760bb09952c89a95): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
Jul 25 16:14:47 kmaster-01 kubelet[1847]: I0725 16:14:47.880056 1847 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 96806baf5a502c21d679a6e2072318035c45551bfd5c05e7311b9b0c2ca1c423
Jul 25 16:14:47 kmaster-01 kubelet[1847]: E0725 16:14:47.880379 1847 pod_workers.go:191] Error syncing pod 864baeac-abd1-4c51-9883-245eb37be672 ("liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"), skipping: failed to "StartContainer" for "liveness" with CrashLoopBackOff: "back-off 5m0s restarting failed container=liveness pod=liveness-exec_default(864baeac-abd1-4c51-9883-245eb37be672)"
Jul 25 16:14:48 kmaster-01 kubelet[1847]: E0725 16:14:48.107483 1847 kuberuntime_manager.go:937] PodSandboxStatus of sandbox "50f481e1b6d47d9f179da6c400be14c9d293f9716b82f290fb53e36ebbcfdabc" for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" error: rpc error: code = Unknown desc = Error: No such container: 50f481e1b6d47d9f179da6c400be14c9d293f9716b82f290fb53e36ebbcfdabc
Jul 25 16:14:49 kmaster-01 kubelet[1847]: W0725 16:14:49.535925 1847 pod_container_deletor.go:77] Container "50f481e1b6d47d9f179da6c400be14c9d293f9716b82f290fb53e36ebbcfdabc" not found in pod's containers
Jul 25 16:14:55 kmaster-01 kubelet[1847]: E0725 16:14:55.703266 1847 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_334 (808d4f7cd6119683d39e8c0ce7667d4980ce477624783c1b9f842c41e221c616): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:55 kmaster-01 kubelet[1847]: E0725 16:14:55.703383 1847 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_334 (808d4f7cd6119683d39e8c0ce7667d4980ce477624783c1b9f842c41e221c616): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:55 kmaster-01 kubelet[1847]: E0725 16:14:55.703414 1847 kuberuntime_manager.go:727] createPodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_334 (808d4f7cd6119683d39e8c0ce7667d4980ce477624783c1b9f842c41e221c616): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use
Jul 25 16:14:55 kmaster-01 kubelet[1847]: E0725 16:14:55.703514 1847 pod_workers.go:191] Error syncing pod bf410a49-d43a-4050-9709-204e0f4d3ea6 ("ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)"), skipping: failed to "CreatePodSandbox" for "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" with CreatePodSandboxError: "CreatePodSandbox for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system(bf410a49-d43a-4050-9709-204e0f4d3ea6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "ingress-nginx-controller-69ccf5d9d8-7qvfn": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_ingress-nginx-controller-69ccf5d9d8-7qvfn_kube-system_bf410a49-d43a-4050-9709-204e0f4d3ea6_334 (808d4f7cd6119683d39e8c0ce7667d4980ce477624783c1b9f842c41e221c616): Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use"
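
Note on the kubelet section above: every restart of ingress-nginx-controller-69ccf5d9d8-7qvfn fails with "listen tcp 0.0.0.0:80: bind: address already in use", i.e. with the none driver the controller's host port 80 cannot be published because some other process on kmaster-01 already owns it, which is why the addon verification times out. A minimal way to confirm what is holding the port (assuming ss or lsof is available on the host; these commands are illustrative, not part of minikube):

# show the process currently listening on host port 80
sudo ss -tlnp 'sport = :80'

# alternative, if lsof is installed
sudo lsof -i TCP:80 -sTCP:LISTEN

Freeing port 80 (or stopping the conflicting web server) before re-running minikube addons enable ingress should let the controller pod start.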

==> kubernetes-dashboard [d616f68c7110] <==
2020/07/24 16:29:37 [2020-07-24T16:29:37Z] Incoming HTTP/1.1 GET /api/v1/ingress/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:37 [2020-07-24T16:29:37Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:37 [2020-07-24T16:29:37Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:37 Getting list of all services in the cluster
2020/07/24 16:29:37 [2020-07-24T16:29:37Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:38 [2020-07-24T16:29:38Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 172.17.0.1:56256:
2020/07/24 16:29:38 Getting list of namespaces
2020/07/24 16:29:38 [2020-07-24T16:29:38Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:39 [2020-07-24T16:29:39Z] Incoming HTTP/1.1 GET /api/v1/cronjob/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:39 Getting list of all cron jobs in the cluster
2020/07/24 16:29:39 [2020-07-24T16:29:39Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:40 [2020-07-24T16:29:40Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:40 [2020-07-24T16:29:40Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:41 [2020-07-24T16:29:41Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:41 Getting list of all deployments in the cluster
2020/07/24 16:29:41 received 0 resources from sidecar instead of 3
2020/07/24 16:29:41 received 0 resources from sidecar instead of 3
2020/07/24 16:29:41 [2020-07-24T16:29:41Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:42 [2020-07-24T16:29:42Z] Incoming HTTP/1.1 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:42 Getting list of all jobs in the cluster
2020/07/24 16:29:42 [2020-07-24T16:29:42Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:45 [2020-07-24T16:29:45Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:45 Getting list of all pods in the cluster
2020/07/24 16:29:45 received 0 resources from sidecar instead of 3
2020/07/24 16:29:45 received 0 resources from sidecar instead of 3
2020/07/24 16:29:45 Getting pod metrics
2020/07/24 16:29:45 received 0 resources from sidecar instead of 3
2020/07/24 16:29:45 received 0 resources from sidecar instead of 3
2020/07/24 16:29:45 Skipping metric because of error: Metric label not set.
2020/07/24 16:29:45 Skipping metric because of error: Metric label not set.
2020/07/24 16:29:45 Skipping metric because of error: Metric label not set.
2020/07/24 16:29:45 Skipping metric because of error: Metric label not set.
2020/07/24 16:29:45 Skipping metric because of error: Metric label not set.
2020/07/24 16:29:45 Skipping metric because of error: Metric label not set.
2020/07/24 16:29:45 [2020-07-24T16:29:45Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:47 [2020-07-24T16:29:47Z] Incoming HTTP/1.1 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:47 Getting list of all replica sets in the cluster
2020/07/24 16:29:47 received 0 resources from sidecar instead of 3
2020/07/24 16:29:47 received 0 resources from sidecar instead of 3
2020/07/24 16:29:47 [2020-07-24T16:29:47Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:49 [2020-07-24T16:29:49Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:49 Getting list of all replication controllers in the cluster
2020/07/24 16:29:49 [2020-07-24T16:29:49Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:55 [2020-07-24T16:29:55Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:55 Getting list of all pet sets in the cluster
2020/07/24 16:29:55 [2020-07-24T16:29:55Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:57 [2020-07-24T16:29:57Z] Incoming HTTP/1.1 GET /api/v1/ingress/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:57 [2020-07-24T16:29:57Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:29:59 [2020-07-24T16:29:59Z] Incoming HTTP/1.1 GET /api/v1/service/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:29:59 Getting list of all services in the cluster
2020/07/24 16:29:59 [2020-07-24T16:29:59Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:30:04 [2020-07-24T16:30:04Z] Incoming HTTP/1.1 GET /api/v1/configmap/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:30:04 Getting list config maps in the namespace default
2020/07/24 16:30:04 [2020-07-24T16:30:04Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:30:06 [2020-07-24T16:30:06Z] Incoming HTTP/1.1 GET /api/v1/persistentvolumeclaim/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:30:06 Getting list persistent volumes claims
2020/07/24 16:30:06 [2020-07-24T16:30:06Z] Outcoming response to 172.17.0.1:56256 with 200 status code
2020/07/24 16:30:11 [2020-07-24T16:30:11Z] Incoming HTTP/1.1 GET /api/v1/secret/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 172.17.0.1:56256:
2020/07/24 16:30:11 Getting list of secrets in &{[default]} namespace
2020/07/24 16:30:11 [2020-07-24T16:30:11Z] Outcoming response to 172.17.0.1:56256 with 200 status code
==> storage-provisioner [01fdf86ab4a2] <==
==> storage-provisioner [ca0634e133af] <==
root@kmaster-01:/etc/kubernetes/Ingress#
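For what it's worth, the kubelet entries above point at the likely root cause: with the none driver the ingress-nginx controller tries to bind port 80 directly on the host, and kmaster-01 already has something listening there ("listen tcp 0.0.0.0:80: bind: address already in use"). As a rough check (just a sketch, assuming standard Debian tooling; not part of the original report), you can look up which process holds the port before re-enabling the addon:

# show the process currently listening on TCP port 80 (run on the host as root)
sudo ss -tlnp 'sport = :80'
# alternatively, if lsof is installed
sudo lsof -iTCP:80 -sTCP:LISTEN

Once port 80 is free (or the conflicting service is stopped), re-running minikube addons enable ingress should get past the repeated CreatePodSandbox failures shown in the kubelet log.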