I am having trouble getting host.minikube.internal to resolve on my Mac host. It only resolves correctly inside containers, which unfortunately makes it much harder to develop frontend applications, particularly when redirects are involved between multiple minikube containers.
Please make sure that the host.minikube.internal domain also resolves from the host's perspective, so that frontend applications that redirect between containers work intuitively.
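You can see the asymmetry directly (a sketch; the exact IP and resolver output will differ per setup):

# On the Mac host the name does not resolve at all:
dscacheutil -q host -a name host.minikube.internal
# (no records returned)

# Inside the minikube node it resolves via the injected /etc/hosts entry:
minikube ssh -- grep host.minikube.internal /etc/hosts
# e.g. 192.168.65.2  host.minikube.internal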
Steps to reproduce the issue:
1. minikube start
2. Configure any two HTTP applications to redirect to each other, e.g. two Ruby on Rails apps.
3. Deploy the applications in minikube.
4. kubectl port-forward both applications.
5. Try to redirect from one host.minikube.internal port to the other (see the sketch after this list).
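A minimal sketch of steps 4 and 5, assuming two Deployments named app-a and app-b serving HTTP on port 3000 (the names, ports, and paths are illustrative, not from the report):

# Forward each app to its own local port.
kubectl port-forward deployment/app-a 8080:3000 &
kubectl port-forward deployment/app-b 8081:3000 &

# Suppose app-a redirects logins to app-b via host.minikube.internal.
# Inside the cluster that name resolves; from curl (or a browser) on the
# Mac host it does not, so the redirect dead-ends:
curl -iL http://localhost:8080/login
#   HTTP/1.1 302 Found
#   Location: http://host.minikube.internal:8081/callback
#   curl: (6) Could not resolve host: host.minikube.internal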
# Idempotently pin the "minikube" hostname in /etc/hosts (Linux; GNU grep/sed):
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    # An existing 127.0.1.1 entry: rewrite it to point at minikube.
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts
  else
    # No 127.0.1.1 entry yet: append one.
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts
  fi
fi
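The snippet above assumes GNU grep/sed on Linux. On a Mac, one hedged workaround for this issue is to map the name to 127.0.0.1 on the host itself; since kubectl port-forward listens on localhost by default, host.minikube.internal:<port> and localhost:<port> then coincide (BSD sed would also need an explicit, possibly empty, -i suffix):

# Hypothetical macOS equivalent: make the name resolve on the host, too.
if ! grep -q 'host.minikube.internal' /etc/hosts; then
  echo '127.0.0.1 host.minikube.internal' | sudo tee -a /etc/hosts
fi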
I0726 18:33:56.499189 1 event.go:291] "Event occurred" object="default/kudo-auth-service-b5fb4f998" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-b5fb4f998-w8hwf"
I0726 18:36:19.089177 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:36:19.106508 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-gckd5"
I0726 18:36:41.049078 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:36:41.056554 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-mr5g7"
I0726 18:38:37.634266 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:38:37.650814 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-mdq2n"
I0726 18:39:03.871529 1 event.go:291] "Event occurred" object="kube-system/registry-creds" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set registry-creds-85b974c7d7 to 1"
I0726 18:39:03.934580 1 event.go:291] "Event occurred" object="kube-system/registry-creds-85b974c7d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-creds-85b974c7d7-h96t2"
I0726 18:39:14.272970 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:39:14.295636 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-l2qzr"
I0726 18:52:17.847676 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-5587d67fbb to 1"
I0726 18:52:17.862094 1 event.go:291] "Event occurred" object="default/kudo-auth-service-5587d67fbb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-5587d67fbb-9pc94"
I0726 18:54:43.922557 1 event.go:291] "Event occurred" object="default/auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set auth-service-5544f8f75d to 1"
I0726 18:54:43.930738 1 event.go:291] "Event occurred" object="default/auth-service-5544f8f75d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: auth-service-5544f8f75d-z2zw9"
I0726 20:33:18.617715 1 cleaner.go:180] Cleaning CSR "csr-svcqq" as it is more than 1h0m0s old and approved.
E0726 23:09:48.109494 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0726 23:09:48.116277 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 02:00:13.481544 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0727 02:00:13.481551 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 12:44:30.495040 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0727 12:44:30.495023 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 15:03:58.528707 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0727 15:03:58.528729 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0727 17:51:28.109264 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 17:51:28.125556 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 01:33:17.978345 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 01:33:18.037941 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 12:47:52.804388 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 12:47:52.904316 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 14:55:43.199852 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 14:55:43.246170 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 17:58:46.932393 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 17:58:47.019846 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0728 21:40:04.673774 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 21:40:04.673846 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0729 12:56:50.192441 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0729 12:56:50.611737 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0729 19:03:41.701575 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0729 19:03:41.701889 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0730 07:06:53.151016 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0730 07:06:53.216307 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0730 16:50:31.279367 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0730 16:50:31.653367 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0730 17:59:19.014468 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0730 17:59:19.339104 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0731 00:30:03.012033 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0731 00:30:03.295075 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0801 23:51:31.749627 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0801 23:51:31.749627 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0802 09:45:07.807829 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0802 09:45:07.807927 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0802 17:05:29.897665 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0802 17:05:29.897665 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0802 18:08:38.833639 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0802 18:08:38.834050 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0802 21:33:36.756468 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0802 21:33:36.756495 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0803 12:59:17.531844 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0803 12:59:17.531889 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
==> kube-controller-manager [aa431070033b] <==
I0810 21:42:51.357652 1 shared_informer.go:247] Caches are synced for deployment
I0810 21:42:51.793142 1 shared_informer.go:247] Caches are synced for garbage collector
I0810 21:42:51.807581 1 shared_informer.go:247] Caches are synced for garbage collector
I0810 21:42:51.807646 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
E0810 23:42:11.072566 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0810 23:42:11.074575 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0811 12:58:12.058202 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0811 12:58:12.058202 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0811 17:12:03.840496 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0811 17:12:03.840526 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
I0811 17:26:22.433234 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-69688d4579 to 1"
I0811 17:26:22.433431 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-667c95f5b8 to 1"
I0811 17:26:22.496612 1 event.go:291] "Event occurred" object="default/dev-login-69688d4579" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-69688d4579-4gdhz"
I0811 17:26:22.499530 1 event.go:291] "Event occurred" object="default/hydra-667c95f5b8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-667c95f5b8-t45ht"
I0811 17:27:13.345094 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-77954dc59b to 1"
I0811 17:27:13.393715 1 event.go:291] "Event occurred" object="default/meetings-77954dc59b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-77954dc59b-nfpmk"
I0811 17:28:01.292815 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-8bcb8d68f to 1"
I0811 17:28:01.293365 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-6d485cb46c to 1"
I0811 17:28:01.389412 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5796658bb4 to 1"
I0811 17:28:01.390288 1 event.go:291] "Event occurred" object="default/dev-login-6d485cb46c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-6d485cb46c-472jc"
I0811 17:28:01.391926 1 event.go:291] "Event occurred" object="default/hydra-8bcb8d68f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-8bcb8d68f-s4q2q"
I0811 17:28:01.488154 1 event.go:291] "Event occurred" object="default/meetings-5796658bb4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5796658bb4-w88ch"
I0811 17:41:55.828225 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:41:55.828319 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:41:55.830248 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:41:55.859355 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-fsgt6"
I0811 17:41:55.859440 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-b9rbz"
I0811 17:41:55.865379 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-4xbll"
I0811 17:42:20.753579 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:42:20.768368 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-w2qlb"
I0811 17:42:20.774364 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:42:20.788463 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-b2r28"
I0811 17:42:20.891258 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:42:20.921016 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-jmwwm"
I0811 17:46:11.289941 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:46:11.290025 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:46:11.291849 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:46:11.305681 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-2gtw7"
I0811 17:46:11.308453 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-jbwnz"
I0811 17:46:11.311909 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-47dct"
I0811 17:46:18.160521 1 event.go:291] "Event occurred" object="default/meetings" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint default/meetings: Operation cannot be fulfilled on endpoints "meetings": the object has been modified; please apply your changes to the latest version and try again"
I0811 17:46:21.167364 1 event.go:291] "Event occurred" object="default/dev-login" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint default/dev-login: Operation cannot be fulfilled on endpoints "dev-login": the object has been modified; please apply your changes to the latest version and try again"
I0811 17:49:51.011376 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:49:51.017527 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:49:51.029204 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-7nnq8"
I0811 17:49:51.029764 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-6xjvz"
I0811 17:49:51.040074 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:49:51.068439 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-zbwwl"
I0811 17:52:41.039085 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:52:41.049661 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-dlw7p"
I0811 17:52:41.051425 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-d9c6b5dcb to 1"
I0811 17:52:41.060371 1 event.go:291] "Event occurred" object="default/dev-login-d9c6b5dcb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-d9c6b5dcb-ndz5v"
I0811 17:52:41.158308 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:52:41.171037 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-fx7f5"
I0811 17:54:42.454143 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6f6468fb4c to 1"
I0811 17:54:42.467471 1 event.go:291] "Event occurred" object="default/hydra-6f6468fb4c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6f6468fb4c-sz54x"
I0811 17:54:42.467508 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-5f7c84dcff to 1"
I0811 17:54:42.471722 1 event.go:291] "Event occurred" object="default/dev-login-5f7c84dcff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-5f7c84dcff-qbqcm"
I0811 17:54:42.518243 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5db87dbdff to 1"
I0811 17:54:42.532086 1 event.go:291] "Event occurred" object="default/meetings-5db87dbdff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5db87dbdff-wl5pm"
==> kube-proxy [5ad45854bdd9] <==
W0730 14:18:40.714473 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 14:26:40.182694 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 14:32:05.794435 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 16:51:05.164603 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 17:59:56.535691 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 12:58:15.333764 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:08:02.628292 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:14:49.137170 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:22:14.704450 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:29:58.151573 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:35:04.862006 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:44:07.247538 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:51:05.802996 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:58:04.331950 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:04:00.925246 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:11:54.383570 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:19:21.893873 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:26:20.423755 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:33:12.989753 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:42:05.387809 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:51:10.734203 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 15:00:52.047985 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 15:05:52.739934 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 15:14:06.199313 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 15:19:19.813619 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 15:26:38.338285 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 17:05:59.257282 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 17:13:39.742575 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 17:20:03.295155 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 17:25:47.882865 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 18:13:46.954751 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 18:33:37.167942 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 18:41:55.576842 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 18:47:28.193669 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 18:54:52.706015 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:00:11.327645 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:08:24.768839 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:15:55.245554 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:25:02.617731 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:31:11.200563 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:36:34.926007 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:43:22.448430 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 19:53:09.797808 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 20:00:37.284870 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 20:10:32.618793 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 20:19:11.034811 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 20:25:41.590346 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:00:05.194186 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:09:53.486659 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:16:33.028353 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:21:40.676508 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:28:54.147182 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:34:42.867134 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:40:04.530065 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:50:02.853496 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 13:59:20.210673 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 14:08:37.638588 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 14:15:41.134187 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 14:25:09.492284 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0803 14:34:56.850466 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
==> kube-proxy [61c7576e859d] <==
I0810 21:42:45.386567 1 server_others.go:140] Detected node IP 192.168.49.2
W0810 21:42:45.386618 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0810 21:42:45.478849 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0810 21:42:45.478964 1 server_others.go:212] Using iptables Proxier.
I0810 21:42:45.479005 1 server_others.go:219] creating dualStackProxier for iptables.
W0810 21:42:45.479110 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0810 21:42:45.481806 1 server.go:643] Version: v1.21.2
I0810 21:42:45.482881 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0810 21:42:45.482947 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0810 21:42:45.485395 1 config.go:315] Starting service config controller
I0810 21:42:45.485532 1 config.go:224] Starting endpoint slice config controller
I0810 21:42:45.486411 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0810 21:42:45.487661 1 shared_informer.go:240] Waiting for caches to sync for service config
W0810 21:42:45.493790 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 21:42:45.495864 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0810 21:42:45.586678 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0810 21:42:45.589483 1 shared_informer.go:247] Caches are synced for service config
W0810 21:51:12.921716 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 21:57:52.478213 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 22:06:42.868744 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 22:16:28.226326 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 22:21:53.856143 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 22:31:15.207292 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 22:40:17.593883 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 22:46:36.151686 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:00:31.982933 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:06:49.526468 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:15:50.899308 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:23:17.378600 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:29:29.973532 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:35:17.556637 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:42:40.035328 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:51:37.444282 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 13:59:23.887708 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:08:16.358887 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:18:09.678830 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:24:51.238894 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:34:46.497059 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:43:10.903188 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:49:01.519927 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 14:56:56.960182 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:05:28.366078 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:13:09.877979 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:21:14.326779 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:27:48.879595 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:37:00.307072 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:43:24.863004 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:52:25.247678 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 15:58:06.838890 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 16:07:40.187659 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 16:15:21.742323 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 16:23:42.168968 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:07:26.895878 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:14:59.372227 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:21:19.920763 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:30:50.307487 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:37:50.906747 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:44:14.500347 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:53:20.966063 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0811 17:59:49.520700 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
==> kube-scheduler [444f9d5def83] <==
I0810 21:42:32.192914 1 serving.go:347] Generated self-signed cert in-memory
W0810 21:42:37.751137 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0810 21:42:37.751281 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0810 21:42:37.751309 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0810 21:42:37.751328 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0810 21:42:37.969364 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0810 21:42:37.972674 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0810 21:42:37.972918 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0810 21:42:37.973050 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0810 21:42:38.174116 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [b5f048955e8f] <==
I0726 18:33:19.432183 1 serving.go:347] Generated self-signed cert in-memory
W0726 18:33:23.653172 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0726 18:33:23.653517 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0726 18:33:23.653734 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0726 18:33:23.653918 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0726 18:33:23.840215 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0726 18:33:23.840588 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0726 18:33:23.844310 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0726 18:33:23.844418 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0726 18:33:23.848591 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:23.849351 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:23.859711 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0726 18:33:23.860233 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0726 18:33:23.860252 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0726 18:33:23.861194 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0726 18:33:23.861350 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0726 18:33:23.861471 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0726 18:33:23.861680 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0726 18:33:23.861829 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0726 18:33:23.861967 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0726 18:33:23.862162 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:23.928284 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0726 18:33:23.929198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:24.668416 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:24.739675 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0726 18:33:24.757037 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0726 18:33:24.832570 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0726 18:33:24.839722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0726 18:33:24.852919 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0726 18:33:24.944301 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0726 18:33:25.043969 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0726 18:33:25.055946 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:25.072559 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:25.096445 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0726 18:33:28.042116 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
-- Logs begin at Tue 2021-08-10 21:42:05 UTC, end at Wed 2021-08-11 18:00:35 UTC. --
Aug 11 17:54:10 minikube kubelet[1170]: E0811 17:54:10.820065 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 1m20s restarting failed container=meetings pod=meetings-5fd8b9d97f-fx7f5_default(681ed0d3-7255-459a-b96c-36e019b9e9f5)"" pod="default/meetings-5fd8b9d97f-fx7f5" podUID=681ed0d3-7255-459a-b96c-36e019b9e9f5
Aug 11 17:54:11 minikube kubelet[1170]: I0811 17:54:11.832071 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5fd8b9d97f-fx7f5 through plugin: invalid network status for"
Aug 11 17:54:25 minikube kubelet[1170]: I0811 17:54:25.674739 1170 scope.go:111] "RemoveContainer" containerID="5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f"
Aug 11 17:54:25 minikube kubelet[1170]: E0811 17:54:25.675399 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 1m20s restarting failed container=meetings pod=meetings-5fd8b9d97f-fx7f5_default(681ed0d3-7255-459a-b96c-36e019b9e9f5)"" pod="default/meetings-5fd8b9d97f-fx7f5" podUID=681ed0d3-7255-459a-b96c-36e019b9e9f5
Aug 11 17:54:37 minikube kubelet[1170]: I0811 17:54:37.674430 1170 scope.go:111] "RemoveContainer" containerID="5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f"
Aug 11 17:54:37 minikube kubelet[1170]: E0811 17:54:37.675118 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 1m20s restarting failed container=meetings pod=meetings-5fd8b9d97f-fx7f5_default(681ed0d3-7255-459a-b96c-36e019b9e9f5)"" pod="default/meetings-5fd8b9d97f-fx7f5" podUID=681ed0d3-7255-459a-b96c-36e019b9e9f5
Aug 11 17:54:41 minikube kubelet[1170]: I0811 17:54:41.236671 1170 scope.go:111] "RemoveContainer" containerID="5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.327480 1170 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-smt7h" (UniqueName: "kubernetes.io/projected/681ed0d3-7255-459a-b96c-36e019b9e9f5-kube-api-access-smt7h") pod "681ed0d3-7255-459a-b96c-36e019b9e9f5" (UID: "681ed0d3-7255-459a-b96c-36e019b9e9f5") "
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.330346 1170 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681ed0d3-7255-459a-b96c-36e019b9e9f5-kube-api-access-smt7h" (OuterVolumeSpecName: "kube-api-access-smt7h") pod "681ed0d3-7255-459a-b96c-36e019b9e9f5" (UID: "681ed0d3-7255-459a-b96c-36e019b9e9f5"). InnerVolumeSpecName "kube-api-access-smt7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.428586 1170 reconciler.go:319] "Volume detached for volume "kube-api-access-smt7h" (UniqueName: "kubernetes.io/projected/681ed0d3-7255-459a-b96c-36e019b9e9f5-kube-api-access-smt7h") on node "minikube" DevicePath """
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.474442 1170 topology_manager.go:187] "Topology Admit Handler"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.508782 1170 topology_manager.go:187] "Topology Admit Handler"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.618455 1170 topology_manager.go:187] "Topology Admit Handler"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.629791 1170 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-h47jg" (UniqueName: "kubernetes.io/projected/9857b4aa-4c34-4e33-815d-c7f45595c8ce-kube-api-access-h47jg") pod "dev-login-5f7c84dcff-qbqcm" (UID: "9857b4aa-4c34-4e33-815d-c7f45595c8ce") "
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.629894 1170 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-c85cc" (UniqueName: "kubernetes.io/projected/8a7c8f8b-895b-4ecb-8dd1-a6148a36c111-kube-api-access-c85cc") pod "hydra-6f6468fb4c-sz54x" (UID: "8a7c8f8b-895b-4ecb-8dd1-a6148a36c111") "
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.730678 1170 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-99kfq" (UniqueName: "kubernetes.io/projected/96449dce-9e25-44ee-87af-a1d3b7777aa1-kube-api-access-99kfq") pod "meetings-5db87dbdff-wl5pm" (UID: "96449dce-9e25-44ee-87af-a1d3b7777aa1") "
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.728409 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hydra-6f6468fb4c-sz54x through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.736198 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.880735 1170 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c240a97801b37eebe0911ec9775abb5e4eeabca16c3419db371dbd01d0a5fb13"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.885276 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.888797 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hydra-6f6468fb4c-sz54x through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.895827 1170 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3ed52d81849b4c5dedcfe627dc78e904619a1e68fa487fd5fc659c89d8d69c98"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.899481 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.906533 1170 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7b6c68238d2cd825c10eea4ca6c331dec1f767129b7d4bf53a9f3b44e9f66a21"
Aug 11 17:54:44 minikube kubelet[1170]: I0811 17:54:44.926337 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:44 minikube kubelet[1170]: I0811 17:54:44.932698 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:44 minikube kubelet[1170]: I0811 17:54:44.938697 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hydra-6f6468fb4c-sz54x through plugin: invalid network status for"
Aug 11 17:54:45 minikube kubelet[1170]: I0811 17:54:45.957855 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:46 minikube kubelet[1170]: I0811 17:54:46.977308 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:46 minikube kubelet[1170]: I0811 17:54:46.984389 1170 scope.go:111] "RemoveContainer" containerID="4a35c1b85b8d21b25ea5e735624a426e3ef672188e3d967539ee4cdf6a527748"
Aug 11 17:54:48 minikube kubelet[1170]: I0811 17:54:48.002869 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:48 minikube kubelet[1170]: I0811 17:54:48.014080 1170 scope.go:111] "RemoveContainer" containerID="4a35c1b85b8d21b25ea5e735624a426e3ef672188e3d967539ee4cdf6a527748"
Aug 11 17:54:48 minikube kubelet[1170]: I0811 17:54:48.014589 1170 scope.go:111] "RemoveContainer" containerID="16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4"
Aug 11 17:54:48 minikube kubelet[1170]: E0811 17:54:48.015100 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 10s restarting failed container=meetings pod=meetings-5db87dbdff-wl5pm_default(96449dce-9e25-44ee-87af-a1d3b7777aa1)"" pod="default/meetings-5db87dbdff-wl5pm" podUID=96449dce-9e25-44ee-87af-a1d3b7777aa1
Aug 11 17:54:49 minikube kubelet[1170]: I0811 17:54:49.039202 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:49 minikube kubelet[1170]: I0811 17:54:49.047816 1170 scope.go:111] "RemoveContainer" containerID="16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4"
Aug 11 17:54:49 minikube kubelet[1170]: E0811 17:54:49.048277 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 10s restarting failed container=meetings pod=meetings-5db87dbdff-wl5pm_default(96449dce-9e25-44ee-87af-a1d3b7777aa1)"" pod="default/meetings-5db87dbdff-wl5pm" podUID=96449dce-9e25-44ee-87af-a1d3b7777aa1
Aug 11 17:54:58 minikube kubelet[1170]: E0811 17:54:58.264860 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:02 minikube kubelet[1170]: I0811 17:55:02.639812 1170 scope.go:111] "RemoveContainer" containerID="16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4"
Aug 11 17:55:04 minikube kubelet[1170]: I0811 17:55:04.246279 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:55:05 minikube kubelet[1170]: I0811 17:55:05.357729 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:55:08 minikube kubelet[1170]: E0811 17:55:08.381548 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:10 minikube kubelet[1170]: I0811 17:55:10.567805 1170 scope.go:111] "RemoveContainer" containerID="0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e"
Aug 11 17:55:10 minikube kubelet[1170]: I0811 17:55:10.607403 1170 scope.go:111] "RemoveContainer" containerID="06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16"
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.783992 1170 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-w966b" (UniqueName: "kubernetes.io/projected/2ad4851c-8380-4a9f-937c-233630dc97d7-kube-api-access-w966b") pod "2ad4851c-8380-4a9f-937c-233630dc97d7" (UID: "2ad4851c-8380-4a9f-937c-233630dc97d7") "
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.784067 1170 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-pt74p" (UniqueName: "kubernetes.io/projected/37b7f155-4fd6-443b-a319-e707ebe9f2c1-kube-api-access-pt74p") pod "37b7f155-4fd6-443b-a319-e707ebe9f2c1" (UID: "37b7f155-4fd6-443b-a319-e707ebe9f2c1") "
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.786645 1170 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b7f155-4fd6-443b-a319-e707ebe9f2c1-kube-api-access-pt74p" (OuterVolumeSpecName: "kube-api-access-pt74p") pod "37b7f155-4fd6-443b-a319-e707ebe9f2c1" (UID: "37b7f155-4fd6-443b-a319-e707ebe9f2c1"). InnerVolumeSpecName "kube-api-access-pt74p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.786827 1170 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad4851c-8380-4a9f-937c-233630dc97d7-kube-api-access-w966b" (OuterVolumeSpecName: "kube-api-access-w966b") pod "2ad4851c-8380-4a9f-937c-233630dc97d7" (UID: "2ad4851c-8380-4a9f-937c-233630dc97d7"). InnerVolumeSpecName "kube-api-access-w966b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.884673 1170 reconciler.go:319] "Volume detached for volume "kube-api-access-w966b" (UniqueName: "kubernetes.io/projected/2ad4851c-8380-4a9f-937c-233630dc97d7-kube-api-access-w966b") on node "minikube" DevicePath """
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.884733 1170 reconciler.go:319] "Volume detached for volume "kube-api-access-pt74p" (UniqueName: "kubernetes.io/projected/37b7f155-4fd6-443b-a319-e707ebe9f2c1-kube-api-access-pt74p") on node "minikube" DevicePath """
Aug 11 17:55:18 minikube kubelet[1170]: E0811 17:55:18.433073 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/37b7f155-4fd6-443b-a319-e707ebe9f2c1/etc-hosts with error exit status 1" pod="default/dev-login-d9c6b5dcb-ndz5v"
Aug 11 17:55:18 minikube kubelet[1170]: E0811 17:55:18.439660 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:27 minikube kubelet[1170]: I0811 17:55:27.334359 1170 log.go:184] http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader (httplog.go:217)
Aug 11 17:55:28 minikube kubelet[1170]: E0811 17:55:28.516062 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:28 minikube kubelet[1170]: E0811 17:55:28.524908 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/37b7f155-4fd6-443b-a319-e707ebe9f2c1/etc-hosts with error exit status 1" pod="default/dev-login-d9c6b5dcb-ndz5v"
Aug 11 17:55:28 minikube kubelet[1170]: E0811 17:55:28.541800 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/2ad4851c-8380-4a9f-937c-233630dc97d7/etc-hosts with error exit status 1" pod="default/hydra-6699d9db88-dlw7p"
Aug 11 17:55:30 minikube kubelet[1170]: W0811 17:55:30.140819 1170 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 11 17:55:30 minikube kubelet[1170]: E0811 17:55:30.168289 1170 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/3b987718669a210c1a55f59cfb33139450941a2fe5c60f4692b5dd056683eb96/diff" to get inode usage: stat /var/lib/docker/overlay2/3b987718669a210c1a55f59cfb33139450941a2fe5c60f4692b5dd056683eb96/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e" to get inode usage: stat /var/lib/docker/containers/0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e: no such file or directory
Aug 11 17:55:30 minikube kubelet[1170]: E0811 17:55:30.169852 1170 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/a892d6190a17c53d3933f049fe9a26b2f26558cac808021451cc01b21cb3cb21/diff" to get inode usage: stat /var/lib/docker/overlay2/a892d6190a17c53d3933f049fe9a26b2f26558cac808021451cc01b21cb3cb21/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16" to get inode usage: stat /var/lib/docker/containers/06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16: no such file or directory
Aug 11 18:00:29 minikube kubelet[1170]: W0811 18:00:29.795827 1170 sysinfo.go:203] Nodes topology is not available, providing CPU topology
==> storage-provisioner [01a3799e1aff] <==
I0810 21:42:43.758848 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0810 21:43:13.739450 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
Hi @mcandre, we haven't heard back from you; do you still have this issue?
There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.
I will close this issue for now, but feel free to reopen it when you are ready to provide more information.
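For anyone who lands here with the same symptom, the attached logs show why it happens: during provisioning minikube runs dig +short host.docker.internal inside the node container and writes the result (192.168.65.2 here) into the container's /etc/hosts, so host.minikube.internal is only ever defined inside the node, never on the macOS host. A possible host-side workaround, sketched below under the assumption that every application involved is reachable through kubectl port-forward on localhost, is to map the name to loopback in the Mac's /etc/hosts. This is unofficial and untested, not the fix requested above:
# Unofficial workaround sketch: make host.minikube.internal resolve on the Mac
# itself by pointing it at loopback, where the port-forwarded apps listen.
if ! grep -q 'host.minikube.internal' /etc/hosts; then
echo '127.0.0.1 host.minikube.internal' | sudo tee -a /etc/hosts
fi
Note this only helps for ports that are actually forwarded to localhost; traffic that must reach services inside the cluster still needs a tunnel or ingress.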
Full output of minikube logs command:
Running on machine: lounge
Binary: Built with gc go1.16.5 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0810 16:42:00.805270 35411 out.go:286] Setting OutFile to fd 1 ...
I0810 16:42:00.806305 35411 out.go:338] isatty.IsTerminal(1) = true
I0810 16:42:00.806309 35411 out.go:299] Setting ErrFile to fd 2...
I0810 16:42:00.806313 35411 out.go:338] isatty.IsTerminal(2) = true
I0810 16:42:00.806425 35411 root.go:312] Updating PATH: /Users/andrew/.minikube/bin
I0810 16:42:00.812041 35411 out.go:293] Setting JSON to false
I0810 16:42:00.859709 35411 start.go:111] hostinfo: {"hostname":"lounge.attlocal.net","uptime":2363778,"bootTime":1626267942,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.4","kernelVersion":"20.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"52a1e876-863e-38e3-ac80-09bbab13b752"}
W0810 16:42:00.859843 35411 start.go:119] gopshost.Virtualization returned error: not implemented yet
I0810 16:42:00.882303 35411 out.go:165] 😄 minikube v1.22.0 on Darwin 11.4
I0810 16:42:00.882838 35411 notify.go:169] Checking for updates...
I0810 16:42:00.905684 35411 driver.go:335] Setting default libvirt URI to qemu:///system
I0810 16:42:01.404510 35411 docker.go:132] docker version: linux-20.10.7
I0810 16:42:01.404992 35411 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0810 16:42:02.176412 35411 info.go:263] docker info: {ID:OYGX:6HKE:2RMX:YBRV:UMB2:KB7K:3S33:BOLJ:IIIQ:YOIH:LH37:UKE3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:22 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-10 21:42:01.563434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:2083807232 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0-beta.6] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
I0810 16:42:02.197211 35411 out.go:165] ✨ Using the docker driver based on existing profile
I0810 16:42:02.197266 35411 start.go:278] selected driver: docker
I0810 16:42:02.197279 35411 start.go:751] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:1987 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true registry-creds:true storage-provisioner:true] CustomAddonImages:map[RegistryCreds:upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0810 16:42:02.197382 35411 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0810 16:42:02.197925 35411 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0810 16:42:02.505158 35411 info.go:263] docker info: {ID:OYGX:6HKE:2RMX:YBRV:UMB2:KB7K:3S33:BOLJ:IIIQ:YOIH:LH37:UKE3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:22 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-08-10 21:42:02.3593804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:2083807232 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.0.0-beta.6] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}}
I0810 16:42:02.506253 35411 cni.go:93] Creating CNI manager for ""
I0810 16:42:02.506269 35411 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0810 16:42:02.506280 35411 start_flags.go:275] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:1987 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true registry-creds:true storage-provisioner:true] CustomAddonImages:map[RegistryCreds:upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0810 16:42:02.526121 35411 out.go:165] 👍 Starting control plane node minikube in cluster minikube
I0810 16:42:02.526670 35411 cache.go:117] Beginning downloading kic base image for docker with docker
I0810 16:42:02.565226 35411 out.go:165] 🚜 Pulling base image ...
I0810 16:42:02.565845 35411 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker
I0810 16:42:02.565890 35411 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
I0810 16:42:02.566755 35411 preload.go:150] Found local preload: /Users/andrew/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4
I0810 16:42:02.566781 35411 cache.go:56] Caching tarball of preloaded images
I0810 16:42:02.585337 35411 preload.go:174] Found /Users/andrew/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0810 16:42:02.585378 35411 cache.go:59] Finished verifying existence of preloaded tar for v1.21.2 on docker
I0810 16:42:02.585888 35411 profile.go:148] Saving config to /Users/andrew/.minikube/profiles/minikube/config.json ...
I0810 16:42:02.810791 35411 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
I0810 16:42:02.810819 35411 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
I0810 16:42:02.810830 35411 cache.go:205] Successfully downloaded all kic artifacts
I0810 16:42:02.810880 35411 start.go:313] acquiring machines lock for minikube: {Name:mk5f2981c62cf5230f07bd03c424e1d1eab1b8ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0810 16:42:02.811990 35411 start.go:317] acquired machines lock for "minikube" in 1.080123ms
I0810 16:42:02.812027 35411 start.go:93] Skipping create...Using existing machine configuration
I0810 16:42:02.812037 35411 fix.go:55] fixHost starting:
I0810 16:42:02.812872 35411 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:42:03.009958 35411 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0810 16:42:03.009989 35411 fix.go:134] unexpected machine state, will restart: <nil>
I0810 16:42:03.034651 35411 out.go:165] 🔄 Restarting existing docker container for "minikube" ...
I0810 16:42:03.035170 35411 cli_runner.go:115] Run: docker start minikube
I0810 16:42:04.914930 35411 cli_runner.go:168] Completed: docker start minikube: (1.879681149s)
I0810 16:42:04.915156 35411 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:42:05.138083 35411 kic.go:420] container "minikube" state is running.
I0810 16:42:05.142074 35411 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0810 16:42:05.352810 35411 profile.go:148] Saving config to /Users/andrew/.minikube/profiles/minikube/config.json ...
I0810 16:42:05.356261 35411 machine.go:88] provisioning docker machine ...
I0810 16:42:05.356284 35411 ubuntu.go:169] provisioning hostname "minikube"
I0810 16:42:05.356429 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:05.542156 35411 main.go:130] libmachine: Using SSH client type: native
I0810 16:42:05.566730 35411 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 65158 }
I0810 16:42:05.566743 35411 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0810 16:42:05.592623 35411 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0810 16:42:08.777361 35411 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube
I0810 16:42:08.777639 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:09.003208 35411 main.go:130] libmachine: Using SSH client type: native
I0810 16:42:09.004176 35411 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 65158 }
I0810 16:42:09.004189 35411 main.go:130] libmachine: About to run SSH command:
I0810 16:42:09.138867 35411 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0810 16:42:09.138894 35411 ubuntu.go:175] set auth options {CertDir:/Users/andrew/.minikube CaCertPath:/Users/andrew/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/andrew/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/andrew/.minikube/machines/server.pem ServerKeyPath:/Users/andrew/.minikube/machines/server-key.pem ClientKeyPath:/Users/andrew/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/andrew/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/andrew/.minikube}
I0810 16:42:09.138911 35411 ubuntu.go:177] setting up certificates
I0810 16:42:09.138922 35411 provision.go:83] configureAuth start
I0810 16:42:09.139098 35411 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0810 16:42:09.335495 35411 provision.go:137] copyHostCerts
I0810 16:42:09.337852 35411 exec_runner.go:145] found /Users/andrew/.minikube/ca.pem, removing ...
I0810 16:42:09.337861 35411 exec_runner.go:190] rm: /Users/andrew/.minikube/ca.pem
I0810 16:42:09.342521 35411 exec_runner.go:152] cp: /Users/andrew/.minikube/certs/ca.pem --> /Users/andrew/.minikube/ca.pem (1078 bytes)
I0810 16:42:09.350919 35411 exec_runner.go:145] found /Users/andrew/.minikube/cert.pem, removing ...
I0810 16:42:09.350931 35411 exec_runner.go:190] rm: /Users/andrew/.minikube/cert.pem
I0810 16:42:09.352515 35411 exec_runner.go:152] cp: /Users/andrew/.minikube/certs/cert.pem --> /Users/andrew/.minikube/cert.pem (1123 bytes)
I0810 16:42:09.358675 35411 exec_runner.go:145] found /Users/andrew/.minikube/key.pem, removing ...
I0810 16:42:09.358681 35411 exec_runner.go:190] rm: /Users/andrew/.minikube/key.pem
I0810 16:42:09.359705 35411 exec_runner.go:152] cp: /Users/andrew/.minikube/certs/key.pem --> /Users/andrew/.minikube/key.pem (1679 bytes)
I0810 16:42:09.360337 35411 provision.go:111] generating server cert: /Users/andrew/.minikube/machines/server.pem ca-key=/Users/andrew/.minikube/certs/ca.pem private-key=/Users/andrew/.minikube/certs/ca-key.pem org=andrew.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0810 16:42:09.628923 35411 provision.go:171] copyRemoteCerts
I0810 16:42:09.630100 35411 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0810 16:42:09.630838 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:09.870994 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:09.969751 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0810 16:42:09.999480 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0810 16:42:10.024133 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0810 16:42:10.044165 35411 provision.go:86] duration metric: configureAuth took 905.212584ms
I0810 16:42:10.044176 35411 ubuntu.go:193] setting minikube options for container-runtime
I0810 16:42:10.046819 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:10.250648 35411 main.go:130] libmachine: Using SSH client type: native
I0810 16:42:10.251518 35411 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 65158 }
I0810 16:42:10.251524 35411 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0810 16:42:10.379055 35411 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
I0810 16:42:10.379067 35411 ubuntu.go:71] root file system type: overlay
I0810 16:42:10.379260 35411 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0810 16:42:10.379455 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:10.603058 35411 main.go:130] libmachine: Using SSH client type: native
I0810 16:42:10.604929 35411 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 65158 }
I0810 16:42:10.605072 35411 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0810 16:42:10.756933 35411 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0810 16:42:10.772730 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:11.008427 35411 main.go:130] libmachine: Using SSH client type: native
I0810 16:42:11.009310 35411 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x44042c0] 0x4404280 [] 0s} 127.0.0.1 65158 }
I0810 16:42:11.009324 35411 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0810 16:42:11.150807 35411 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0810 16:42:11.150828 35411 machine.go:91] provisioned docker machine in 5.794424925s
I0810 16:42:11.150840 35411 start.go:267] post-start starting for "minikube" (driver="docker")
I0810 16:42:11.150846 35411 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0810 16:42:11.151060 35411 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0810 16:42:11.151196 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:11.385379 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:11.483452 35411 ssh_runner.go:149] Run: cat /etc/os-release
I0810 16:42:11.491626 35411 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0810 16:42:11.491649 35411 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0810 16:42:11.491662 35411 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0810 16:42:11.491671 35411 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0810 16:42:11.491680 35411 filesync.go:126] Scanning /Users/andrew/.minikube/addons for local assets ...
I0810 16:42:11.492265 35411 filesync.go:126] Scanning /Users/andrew/.minikube/files for local assets ...
I0810 16:42:11.492623 35411 start.go:270] post-start completed in 341.768779ms
I0810 16:42:11.492836 35411 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0810 16:42:11.492955 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:11.715025 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:11.826233 35411 fix.go:57] fixHost completed within 9.01397923s
I0810 16:42:11.826251 35411 start.go:80] releasing machines lock for "minikube", held for 9.014042436s
I0810 16:42:11.826568 35411 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0810 16:42:12.062795 35411 ssh_runner.go:149] Run: systemctl --version
I0810 16:42:12.062812 35411 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0810 16:42:12.062911 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:12.062937 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:12.289283 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:12.304897 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:12.384045 35411 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0810 16:42:12.634745 35411 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0810 16:42:12.655884 35411 cruntime.go:249] skipping containerd shutdown because we are bound to it
I0810 16:42:12.656536 35411 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0810 16:42:12.676216 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0810 16:42:12.696661 35411 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0810 16:42:12.788685 35411 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0810 16:42:12.866752 35411 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0810 16:42:12.878470 35411 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0810 16:42:12.941255 35411 ssh_runner.go:149] Run: sudo systemctl start docker
I0810 16:42:12.955713 35411 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0810 16:42:13.314420 35411 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0810 16:42:13.428718 35411 out.go:192] 🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
I0810 16:42:13.429150 35411 cli_runner.go:115] Run: docker exec -t minikube dig +short host.docker.internal
I0810 16:42:13.775661 35411 network.go:69] got host ip for mount in container by digging dns: 192.168.65.2
I0810 16:42:13.775879 35411 ssh_runner.go:149] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0810 16:42:13.781615 35411 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0810 16:42:13.796453 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0810 16:42:14.014527 35411 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker
I0810 16:42:14.014738 35411 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0810 16:42:14.063335 35411 docker.go:535] Got preloaded images: -- stdout --
875098767412.dkr.ecr.us-east-1.amazonaws.com/kudo-auth-service:develop
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
upmcenterprises/registry-creds:
-- /stdout --
I0810 16:42:14.063351 35411 docker.go:466] Images already preloaded, skipping extraction
I0810 16:42:14.063535 35411 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0810 16:42:14.110043 35411 docker.go:535] Got preloaded images: -- stdout --
875098767412.dkr.ecr.us-east-1.amazonaws.com/kudo-auth-service:develop
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4
upmcenterprises/registry-creds:
-- /stdout --
I0810 16:42:14.110060 35411 cache_images.go:74] Images are preloaded, skipping loading
I0810 16:42:14.110258 35411 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0810 16:42:14.464347 35411 cni.go:93] Creating CNI manager for ""
I0810 16:42:14.464356 35411 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0810 16:42:14.464368 35411 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0810 16:42:14.464386 35411 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0810 16:42:14.464519 35411 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "minikube"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.21.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
I0810 16:42:14.464624 35411 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0810 16:42:14.464798 35411 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
I0810 16:42:14.476091 35411 binaries.go:44] Found k8s binaries, skipping transfer
I0810 16:42:14.476267 35411 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0810 16:42:14.485940 35411 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0810 16:42:14.504763 35411 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0810 16:42:14.530254 35411 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
I0810 16:42:14.556363 35411 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0810 16:42:14.565672 35411 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0810 16:42:14.582417 35411 certs.go:52] Setting up /Users/andrew/.minikube/profiles/minikube for IP: 192.168.49.2
I0810 16:42:14.586484 35411 certs.go:179] skipping minikubeCA CA generation: /Users/andrew/.minikube/ca.key
I0810 16:42:14.590110 35411 certs.go:179] skipping proxyClientCA CA generation: /Users/andrew/.minikube/proxy-client-ca.key
I0810 16:42:14.593777 35411 certs.go:290] skipping minikube-user signed cert generation: /Users/andrew/.minikube/profiles/minikube/client.key
I0810 16:42:14.597437 35411 certs.go:290] skipping minikube signed cert generation: /Users/andrew/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0810 16:42:14.600778 35411 certs.go:290] skipping aggregator signed cert generation: /Users/andrew/.minikube/profiles/minikube/proxy-client.key
I0810 16:42:14.607188 35411 certs.go:369] found cert: /Users/andrew/.minikube/certs/Users/andrew/.minikube/certs/ca-key.pem (1679 bytes)
I0810 16:42:14.608468 35411 certs.go:369] found cert: /Users/andrew/.minikube/certs/Users/andrew/.minikube/certs/ca.pem (1078 bytes)
I0810 16:42:14.609147 35411 certs.go:369] found cert: /Users/andrew/.minikube/certs/Users/andrew/.minikube/certs/cert.pem (1123 bytes)
I0810 16:42:14.609808 35411 certs.go:369] found cert: /Users/andrew/.minikube/certs/Users/andrew/.minikube/certs/key.pem (1679 bytes)
I0810 16:42:14.614151 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0810 16:42:14.646713 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0810 16:42:14.676792 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0810 16:42:14.707456 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0810 16:42:14.737963 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0810 16:42:14.764747 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0810 16:42:14.788101 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0810 16:42:14.808295 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0810 16:42:14.827208 35411 ssh_runner.go:316] scp /Users/andrew/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0810 16:42:14.847833 35411 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0810 16:42:14.863764 35411 ssh_runner.go:149] Run: openssl version
I0810 16:42:14.874335 35411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0810 16:42:14.885001 35411 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0810 16:42:14.890311 35411 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun 10 18:37 /usr/share/ca-certificates/minikubeCA.pem
I0810 16:42:14.890470 35411 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0810 16:42:14.897287 35411 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0810 16:42:14.905916 35411 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:1987 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true registry-creds:true storage-provisioner:true] CustomAddonImages:map[RegistryCreds:upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0810 16:42:14.906088 35411 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0810 16:42:14.953504 35411 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0810 16:42:14.963448 35411 kubeadm.go:401] found existing configuration files, will attempt cluster restart
I0810 16:42:14.964208 35411 kubeadm.go:600] restartCluster start
I0810 16:42:14.964375 35411 ssh_runner.go:149] Run: sudo test -d /data/minikube
I0810 16:42:14.973069 35411 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0810 16:42:14.973250 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0810 16:42:15.180615 35411 kubeconfig.go:93] found "minikube" server: "https://127.0.0.1:51672"
I0810 16:42:15.180631 35411 kubeconfig.go:117] verify returned: got: 127.0.0.1:51672, want: 127.0.0.1:65162
I0810 16:42:15.182052 35411 lock.go:36] WriteFile acquiring /Users/andrew/.kube/config: {Name:mk9cf28e3f619fdd103661b1633bb952dee37b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0810 16:42:15.260961 35411 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0810 16:42:15.271599 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:15.271788 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:15.287262 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:15.489817 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:15.490311 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:15.513241 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:15.688786 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:15.689056 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:15.712596 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:15.888772 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:15.889067 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:15.912113 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:16.089041 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:16.089399 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:16.113884 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:16.291314 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:16.291789 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:16.314276 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:16.490602 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:16.491048 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:16.513271 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:16.687560 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:16.687841 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:16.712001 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:16.888075 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:16.888326 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:16.908006 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:17.088964 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:17.089215 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:17.112470 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:17.289503 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:17.289998 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:17.312170 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:17.488184 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:17.488499 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:17.511853 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:17.690245 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:17.690632 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:17.714612 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:17.888710 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:17.889232 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:17.922787 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:18.088121 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:18.088389 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:18.112692 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:18.289407 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:18.289962 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:18.313897 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:18.313906 35411 api_server.go:164] Checking apiserver status ...
I0810 16:42:18.314122 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
W0810 16:42:18.334717 35411 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: Process exited with status 1
stdout:
stderr:
I0810 16:42:18.334725 35411 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
I0810 16:42:18.334738 35411 kubeadm.go:1032] stopping kube-system containers ...
I0810 16:42:18.334992 35411 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*(kube-system) --format={{.ID}}
I0810 16:42:18.394640 35411 docker.go:367] Stopping containers: [7eb70906ae03 a905b63df8b1 1ab6d4c60e11 0146967387a3 99633f0f0b07 a20cdef8315d 5ad45854bdd9 425e940a9d07 7f62fb1457b0 397807f9f594 229ff1bb8f58 05f4bc252791 b5f048955e8f 2f1676eb9c06 a171a07e9247 3244de780ede]
I0810 16:42:18.394859 35411 ssh_runner.go:149] Run: docker stop 7eb70906ae03 a905b63df8b1 1ab6d4c60e11 0146967387a3 99633f0f0b07 a20cdef8315d 5ad45854bdd9 425e940a9d07 7f62fb1457b0 397807f9f594 229ff1bb8f58 05f4bc252791 b5f048955e8f 2f1676eb9c06 a171a07e9247 3244de780ede
I0810 16:42:18.461557 35411 ssh_runner.go:149] Run: sudo systemctl stop kubelet
I0810 16:42:18.481838 35411 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0810 16:42:18.495408 35411 kubeadm.go:154] found existing configuration files:
-rw------- 1 root root 5643 Jul 26 18:33 /etc/kubernetes/admin.conf
-rw------- 1 root root 5652 Jul 26 18:33 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1971 Jul 26 18:33 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Jul 26 18:33 /etc/kubernetes/scheduler.conf
I0810 16:42:18.495910 35411 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0810 16:42:18.510198 35411 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0810 16:42:18.524967 35411 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0810 16:42:18.538878 35411 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0810 16:42:18.539179 35411 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0810 16:42:18.553050 35411 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0810 16:42:18.567447 35411 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0810 16:42:18.567701 35411 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0810 16:42:18.581669 35411 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0810 16:42:18.595342 35411 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0810 16:42:18.595351 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0810 16:42:19.033250 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0810 16:42:20.524137 35411 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.490839138s)
I0810 16:42:20.524157 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0810 16:42:20.854011 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0810 16:42:21.146621 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0810 16:42:21.455527 35411 api_server.go:50] waiting for apiserver process to appear ...
I0810 16:42:21.455983 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:21.992568 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:22.493271 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:22.991167 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:23.493057 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:23.995860 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:24.495976 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:24.993210 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:25.491211 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:25.993109 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:26.493255 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:26.995963 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:27.495860 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:27.992450 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:28.490724 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:28.992921 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:29.491249 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:29.991026 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:30.495319 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:30.665378 35411 api_server.go:70] duration metric: took 9.209660932s to wait for apiserver process to appear ...
I0810 16:42:30.665392 35411 api_server.go:86] waiting for apiserver healthz status ...
I0810 16:42:30.665408 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:30.668625 35411 api_server.go:255] stopped: https://127.0.0.1:65162/healthz: Get "https://127.0.0.1:65162/healthz": EOF
I0810 16:42:31.168834 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:31.173083 35411 api_server.go:255] stopped: https://127.0.0.1:65162/healthz: Get "https://127.0.0.1:65162/healthz": EOF
I0810 16:42:31.668812 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:31.671977 35411 api_server.go:255] stopped: https://127.0.0.1:65162/healthz: Get "https://127.0.0.1:65162/healthz": EOF
I0810 16:42:32.170092 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:37.171544 35411 api_server.go:255] stopped: https://127.0.0.1:65162/healthz: Get "https://127.0.0.1:65162/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0810 16:42:37.669030 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:37.724027 35411 api_server.go:265] https://127.0.0.1:65162/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User "system:anonymous" cannot get path "/healthz"","reason":"Forbidden","details":{},"code":403}
W0810 16:42:37.724045 35411 api_server.go:101] status: https://127.0.0.1:65162/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User "system:anonymous" cannot get path "/healthz"","reason":"Forbidden","details":{},"code":403}
I0810 16:42:38.171758 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:38.186440 35411 api_server.go:265] https://127.0.0.1:65162/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0810 16:42:38.186479 35411 api_server.go:101] status: https://127.0.0.1:65162/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0810 16:42:38.669245 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:38.683294 35411 api_server.go:265] https://127.0.0.1:65162/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0810 16:42:38.683319 35411 api_server.go:101] status: https://127.0.0.1:65162/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0810 16:42:39.173851 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:39.187734 35411 api_server.go:265] https://127.0.0.1:65162/healthz returned 200:
ok
I0810 16:42:39.200524 35411 api_server.go:139] control plane version: v1.21.2
I0810 16:42:39.200538 35411 api_server.go:129] duration metric: took 8.53496983s to wait for apiserver health ...
I0810 16:42:39.200544 35411 cni.go:93] Creating CNI manager for ""
I0810 16:42:39.200549 35411 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0810 16:42:39.200556 35411 system_pods.go:43] waiting for kube-system pods to appear ...
I0810 16:42:39.216103 35411 system_pods.go:59] 8 kube-system pods found
I0810 16:42:39.216121 35411 system_pods.go:61] "coredns-558bd4d5db-6f6jb" [7a4a0ced-6a0e-4960-a452-3749590cfab3] Running
I0810 16:42:39.216124 35411 system_pods.go:61] "etcd-minikube" [82fa0737-e988-4277-8e3d-73df554009b0] Running
I0810 16:42:39.216127 35411 system_pods.go:61] "kube-apiserver-minikube" [a26dcaa2-7d87-4c4b-b3f1-cffe3d7ad644] Running
I0810 16:42:39.216129 35411 system_pods.go:61] "kube-controller-manager-minikube" [d78e1f3e-9081-45f3-85d5-030bdccc367e] Running
I0810 16:42:39.216132 35411 system_pods.go:61] "kube-proxy-9bhxg" [0b9cdc61-4e34-4a51-9007-5c7e638c3b6d] Running
I0810 16:42:39.216134 35411 system_pods.go:61] "kube-scheduler-minikube" [5d36b757-78c2-44de-9d7f-b472c1c6ba8d] Running
I0810 16:42:39.216136 35411 system_pods.go:61] "registry-creds-85b974c7d7-h96t2" [1a07d0fe-5757-4949-9d7f-b93315132be1] Running
I0810 16:42:39.216142 35411 system_pods.go:61] "storage-provisioner" [469c09fb-dac8-4694-a886-789a517d1c30] Running
I0810 16:42:39.216146 35411 system_pods.go:74] duration metric: took 15.586088ms to wait for pod list to return data ...
I0810 16:42:39.216152 35411 node_conditions.go:102] verifying NodePressure condition ...
I0810 16:42:39.221935 35411 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0810 16:42:39.221946 35411 node_conditions.go:123] node cpu capacity is 6
I0810 16:42:39.221959 35411 node_conditions.go:105] duration metric: took 5.803781ms to run NodePressure ...
I0810 16:42:39.221968 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0810 16:42:39.671761 35411 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0810 16:42:39.692806 35411 ops.go:34] apiserver oom_adj: -16
I0810 16:42:39.692816 35411 kubeadm.go:604] restartCluster took 24.728095987s
I0810 16:42:39.692821 35411 kubeadm.go:392] StartCluster complete in 24.786405818s
I0810 16:42:39.692829 35411 settings.go:142] acquiring lock: {Name:mka6e461ca4594d4c1f9ab9586472614ea96df14 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0810 16:42:39.693539 35411 settings.go:150] Updating kubeconfig: /Users/andrew/.kube/config
I0810 16:42:39.694701 35411 lock.go:36] WriteFile acquiring /Users/andrew/.kube/config: {Name:mk9cf28e3f619fdd103661b1633bb952dee37b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0810 16:42:39.763866 35411 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0810 16:42:39.763896 35411 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}
I0810 16:42:39.763902 35411 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0810 16:42:39.764266 35411 addons.go:342] enableAddons start: toEnable=map[default-storageclass:true registry-creds:true storage-provisioner:true], additional=[]
I0810 16:42:39.787234 35411 out.go:165] 🔎 Verifying Kubernetes components...
I0810 16:42:39.787322 35411 addons.go:59] Setting storage-provisioner=true in profile "minikube"
I0810 16:42:39.787324 35411 addons.go:59] Setting registry-creds=true in profile "minikube"
I0810 16:42:39.787328 35411 addons.go:59] Setting default-storageclass=true in profile "minikube"
I0810 16:42:39.787650 35411 addons.go:135] Setting addon storage-provisioner=true in "minikube"
I0810 16:42:39.787658 35411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0810 16:42:39.787664 35411 addons.go:135] Setting addon registry-creds=true in "minikube"
W0810 16:42:39.787665 35411 addons.go:147] addon storage-provisioner should already be in state true
W0810 16:42:39.787670 35411 addons.go:147] addon registry-creds should already be in state true
I0810 16:42:39.787686 35411 host.go:66] Checking if "minikube" exists ...
I0810 16:42:39.787700 35411 host.go:66] Checking if "minikube" exists ...
I0810 16:42:39.788271 35411 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0810 16:42:39.805849 35411 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:42:39.809285 35411 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:42:39.809278 35411 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:42:40.304389 35411 start.go:710] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0810 16:42:40.304627 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0810 16:42:40.384564 35411 out.go:165] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0810 16:42:40.408812 35411 out.go:165] ▪ Using image upmcenterprises/registry-creds:1.10
I0810 16:42:40.384797 35411 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0810 16:42:40.408865 35411 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0810 16:42:40.409021 35411 addons.go:275] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I0810 16:42:40.409028 35411 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3296 bytes)
I0810 16:42:40.409171 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:40.409183 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:40.577897 35411 addons.go:135] Setting addon default-storageclass=true in "minikube"
W0810 16:42:40.577948 35411 addons.go:147] addon default-storageclass should already be in state true
I0810 16:42:40.577969 35411 host.go:66] Checking if "minikube" exists ...
I0810 16:42:40.580969 35411 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0810 16:42:40.703032 35411 api_server.go:50] waiting for apiserver process to appear ...
I0810 16:42:40.703351 35411 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0810 16:42:40.766828 35411 api_server.go:70] duration metric: took 1.002889636s to wait for apiserver process to appear ...
I0810 16:42:40.773106 35411 api_server.go:86] waiting for apiserver healthz status ...
I0810 16:42:40.773121 35411 api_server.go:239] Checking apiserver healthz at https://127.0.0.1:65162/healthz ...
I0810 16:42:40.787467 35411 api_server.go:265] https://127.0.0.1:65162/healthz returned 200:
ok
I0810 16:42:40.790624 35411 api_server.go:139] control plane version: v1.21.2
I0810 16:42:40.790633 35411 api_server.go:129] duration metric: took 17.519662ms to wait for apiserver health ...
I0810 16:42:40.790638 35411 system_pods.go:43] waiting for kube-system pods to appear ...
I0810 16:42:40.800890 35411 system_pods.go:59] 8 kube-system pods found
I0810 16:42:40.800904 35411 system_pods.go:61] "coredns-558bd4d5db-6f6jb" [7a4a0ced-6a0e-4960-a452-3749590cfab3] Running
I0810 16:42:40.800907 35411 system_pods.go:61] "etcd-minikube" [82fa0737-e988-4277-8e3d-73df554009b0] Running
I0810 16:42:40.800909 35411 system_pods.go:61] "kube-apiserver-minikube" [a26dcaa2-7d87-4c4b-b3f1-cffe3d7ad644] Running
I0810 16:42:40.800912 35411 system_pods.go:61] "kube-controller-manager-minikube" [d78e1f3e-9081-45f3-85d5-030bdccc367e] Running
I0810 16:42:40.800915 35411 system_pods.go:61] "kube-proxy-9bhxg" [0b9cdc61-4e34-4a51-9007-5c7e638c3b6d] Running
I0810 16:42:40.800917 35411 system_pods.go:61] "kube-scheduler-minikube" [5d36b757-78c2-44de-9d7f-b472c1c6ba8d] Running
I0810 16:42:40.800919 35411 system_pods.go:61] "registry-creds-85b974c7d7-h96t2" [1a07d0fe-5757-4949-9d7f-b93315132be1] Running
I0810 16:42:40.800922 35411 system_pods.go:61] "storage-provisioner" [469c09fb-dac8-4694-a886-789a517d1c30] Running
I0810 16:42:40.800925 35411 system_pods.go:74] duration metric: took 10.284025ms to wait for pod list to return data ...
I0810 16:42:40.800930 35411 kubeadm.go:547] duration metric: took 1.036996464s to wait for : map[apiserver:true system_pods:true] ...
I0810 16:42:40.800941 35411 node_conditions.go:102] verifying NodePressure condition ...
I0810 16:42:40.808071 35411 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
I0810 16:42:40.808086 35411 node_conditions.go:123] node cpu capacity is 6
I0810 16:42:40.808093 35411 node_conditions.go:105] duration metric: took 7.149265ms to run NodePressure ...
I0810 16:42:40.808100 35411 start.go:225] waiting for startup goroutines ...
I0810 16:42:40.818367 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:40.829246 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:40.907799 35411 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
I0810 16:42:40.907810 35411 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0810 16:42:40.907960 35411 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0810 16:42:40.939757 35411 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I0810 16:42:40.939757 35411 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0810 16:42:41.174448 35411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65158 SSHKeyPath:/Users/andrew/.minikube/machines/minikube/id_rsa Username:docker}
I0810 16:42:41.373717 35411 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0810 16:42:41.785163 35411 out.go:165] 🌟 Enabled addons: storage-provisioner, registry-creds, default-storageclass
I0810 16:42:41.785208 35411 addons.go:344] enableAddons completed in 2.021235067s
I0810 16:42:41.921316 35411 start.go:462] kubectl: 1.21.2, cluster: 1.21.2 (minor skew: 0)
I0810 16:42:41.941817 35411 out.go:165] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
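Aside on the start log above: it completes cleanly, and even records that CoreDNS already carries the host.minikube.internal record, yet the name still only resolves inside containers. A minimal sketch for comparing the two vantage points (assumes the docker driver, as in the StartCluster config above):

# On the macOS host -- fails today, which is what this issue is about:
ping -c 1 host.minikube.internal

# Inside the node container -- minikube writes the record into /etc/hosts there:
minikube ssh -- grep host.minikube.internal /etc/hosts
minikube ssh -- ping -c 1 host.minikube.internal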
==> Docker <==
-- Logs begin at Tue 2021-08-10 21:42:05 UTC, end at Wed 2021-08-11 18:00:34 UTC. --
Aug 11 17:42:19 minikube dockerd[215]: time="2021-08-11T17:42:19.705566900Z" level=info msg="ignoring event" container=ff3459cc587beca4bf219b2a4f25abd2d37a580774fd0bc8774493f18f635875 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:42:19 minikube dockerd[215]: time="2021-08-11T17:42:19.720737200Z" level=info msg="ignoring event" container=030152533b1df9b632a2c5b1c9a10b8a340b8d0a0e6f207ff88b6e364c4538ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:00 minikube dockerd[215]: time="2021-08-11T17:46:00.979303700Z" level=info msg="ignoring event" container=6102dd002297d4fa85db0d415d096a2649adf80e86053aafe6c652b177918903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:02 minikube dockerd[215]: time="2021-08-11T17:46:02.046382100Z" level=info msg="ignoring event" container=543fc5cdae3d31aceac24345072d151eaf72f96c5f59d4362406810429538310 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:02 minikube dockerd[215]: time="2021-08-11T17:46:02.261148800Z" level=info msg="ignoring event" container=fb4b129e1c0dc5217b370c46a49680c43a3c23f72820b4723e1127158e3a1ebe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:03 minikube dockerd[215]: time="2021-08-11T17:46:03.145120600Z" level=info msg="ignoring event" container=e9f87ee260f31bc088cd37de803ceb07b154600e771c16a6e1f21a756c3bfe38 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:07 minikube dockerd[215]: time="2021-08-11T17:46:07.274288500Z" level=info msg="ignoring event" container=f2be2c0424b22d4f8ba70c72a784098f83b397d377d937c1790dbb2e3f6cc256 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:07 minikube dockerd[215]: time="2021-08-11T17:46:07.274347200Z" level=info msg="ignoring event" container=c42b4f81b336b67ea89cedc1fbcae7f060ad7f45e126666d9277a41b371f59ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:07 minikube dockerd[215]: time="2021-08-11T17:46:07.288578900Z" level=info msg="ignoring event" container=da53b109fe39f2c95c749118bb6c3f906a43c578dbb5eb48c98b25850e5fc9e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:13 minikube dockerd[215]: time="2021-08-11T17:46:13.217258200Z" level=info msg="ignoring event" container=39dfde58ac753b8d091e2029ba6a4d26385a9b1c9258ae423de529519db034ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:13 minikube dockerd[215]: time="2021-08-11T17:46:13.690908800Z" level=info msg="ignoring event" container=de3a985d6431e7693ec590b82501c4ed2e68f4fabd8006c66fbf372d98caf160 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:14 minikube dockerd[215]: time="2021-08-11T17:46:14.557669400Z" level=info msg="ignoring event" container=4b96fdc15d5ed31b391363be0e4f40f9a46b34cabcb0c717497a44002fd46cba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:15 minikube dockerd[215]: time="2021-08-11T17:46:15.032123000Z" level=info msg="ignoring event" container=1bbea55ac14326e13ace31fc0c7dc0be8efc61d62ff7743820f821337a8fd0af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:16 minikube dockerd[215]: time="2021-08-11T17:46:16.069272400Z" level=info msg="ignoring event" container=ba21c1e8cf016992280096f792cf2ab063620634a7c7b36685b0baf6d3c5dd2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:18 minikube dockerd[215]: time="2021-08-11T17:46:18.108833700Z" level=info msg="ignoring event" container=b6b7bc1758a17371bcb67589c8d2305d2971a27f2eeb5fb8e003d301790a36db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:31 minikube dockerd[215]: time="2021-08-11T17:46:31.710843900Z" level=info msg="ignoring event" container=e1f1f979de6fc168e5ff557824ba531fb914fe123b91ced93363295b76af22bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:32 minikube dockerd[215]: time="2021-08-11T17:46:32.157194100Z" level=info msg="ignoring event" container=b5eefc1f9b8714c019d4d8439f849fd19cc2e50e0537b233c297512f49e2a935 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:36 minikube dockerd[215]: time="2021-08-11T17:46:36.018233600Z" level=info msg="ignoring event" container=49868488c9497ed2ee0a5d837fc09bd09b5765bcd641d5eb697fad4a58c04dc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:46:57 minikube dockerd[215]: time="2021-08-11T17:46:57.767387600Z" level=info msg="ignoring event" container=4e97c4face5fcefcdee2d1d388df4eeeaa1636ffcd8c09c7b6c7d54a1ba519f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:47:00 minikube dockerd[215]: time="2021-08-11T17:47:00.708483900Z" level=info msg="ignoring event" container=788b766d7ae7cb698958a37f58b2840bca3fa2b94ce5aac7e91e1cac459e0a8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:47:05 minikube dockerd[215]: time="2021-08-11T17:47:05.973642500Z" level=info msg="ignoring event" container=8a3ce1844b1cf58345f6bb40dace0f9f1c034a4beb022acb6e93551fd8a6a0e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:47:42 minikube dockerd[215]: time="2021-08-11T17:47:42.853071900Z" level=info msg="ignoring event" container=dd0ffe991b40bde177e5565f92fa7df55af47b1a9aaedcfd853203e993f47648 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:48:11 minikube dockerd[215]: time="2021-08-11T17:48:11.162520100Z" level=info msg="Attempting next endpoint for pull after error: Head https://875098767412.dkr.ecr.us-east-1.amazonaws.com/v2/kudo-kcp-dev-login/manifests/dso-555: dial tcp: lookup 875098767412.dkr.ecr.us-east-1.amazonaws.com on 192.168.65.2:53: read udp 192.168.49.2:59571->192.168.65.2:53: i/o timeout"
Aug 11 17:48:11 minikube dockerd[215]: time="2021-08-11T17:48:11.164986400Z" level=error msg="Handler for POST /v1.41/images/create returned error: Head https://875098767412.dkr.ecr.us-east-1.amazonaws.com/v2/kudo-kcp-dev-login/manifests/dso-555: dial tcp: lookup 875098767412.dkr.ecr.us-east-1.amazonaws.com on 192.168.65.2:53: read udp 192.168.49.2:59571->192.168.65.2:53: i/o timeout"
Aug 11 17:48:46 minikube dockerd[215]: time="2021-08-11T17:48:46.166879100Z" level=info msg="ignoring event" container=61d513b359d0763161f7020a6c09ff213bedac735585951f87bff665bf8b4eaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:49:11 minikube dockerd[215]: time="2021-08-11T17:49:11.629833300Z" level=info msg="ignoring event" container=737f11da5a5b8b7573353d58b0f08bccb3481b138ba503e579409489a9a53d94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:49:46 minikube dockerd[215]: time="2021-08-11T17:49:46.935403800Z" level=info msg="ignoring event" container=8fa4a9647bad9818ef44fa625c9da1838e7064b294fb093241d782f72246dffe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:49:46 minikube dockerd[215]: time="2021-08-11T17:49:46.946037600Z" level=info msg="ignoring event" container=6b3a940c0654e326af0773738a69a6ba4bc02ab795cd97d92495a2f8a7ec0fb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:49:54 minikube dockerd[215]: time="2021-08-11T17:49:54.501896200Z" level=info msg="ignoring event" container=aa24d3058836fa84e9a79c0c75be31e9f9dd58bbda6578f404dc614e2c208af6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:49:56 minikube dockerd[215]: time="2021-08-11T17:49:56.008563300Z" level=info msg="ignoring event" container=242b32a31b5c75dbab89a794e25094cead38882ef1fbc41e4bb953b64b184bdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:03 minikube dockerd[215]: time="2021-08-11T17:50:03.013080300Z" level=info msg="ignoring event" container=085843be4bc507db4da7cd293e674ccb213c90baafef80353a569504787a237c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:12 minikube dockerd[215]: time="2021-08-11T17:50:12.158841500Z" level=info msg="ignoring event" container=fde507346ca24e0d8c06d598b0ed9008654aa581600eb83e488c445c17450aed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:13 minikube dockerd[215]: time="2021-08-11T17:50:13.632715900Z" level=info msg="ignoring event" container=307024bc7afca0bead3804848964fa2304482618c26722f618bd0bf65c3b52f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:16 minikube dockerd[215]: time="2021-08-11T17:50:16.546224500Z" level=info msg="Container af4714f60c056497f3ab7454f14201c46ee3690391e18e2f7fc00754e6ef73f9 failed to exit within 30 seconds of signal 15 - using the force"
Aug 11 17:50:16 minikube dockerd[215]: time="2021-08-11T17:50:16.611011000Z" level=info msg="ignoring event" container=af4714f60c056497f3ab7454f14201c46ee3690391e18e2f7fc00754e6ef73f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:16 minikube dockerd[215]: time="2021-08-11T17:50:16.678968000Z" level=info msg="ignoring event" container=98aedfafe1068a3ca7d326f5f26b52006cf01a5913b04b5add27ec6e033c6463 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:35 minikube dockerd[215]: time="2021-08-11T17:50:35.275859600Z" level=info msg="ignoring event" container=054f5d42b79d12c5d4890a5aefb85ee0c7ef11417ebf1c7aad40678c6288304e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:50:41 minikube dockerd[215]: time="2021-08-11T17:50:41.684575300Z" level=info msg="ignoring event" container=14829b855b2751a1f970799809e0d894c70b36301b5c36e81e785525862e0f1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:51:12 minikube dockerd[215]: time="2021-08-11T17:51:12.935211200Z" level=info msg="ignoring event" container=a4b32665fc89ecfde9e41f4781562677a3c60c7ee0cf3b64c5fb247ce4071bfc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:51:22 minikube dockerd[215]: time="2021-08-11T17:51:22.412527100Z" level=info msg="ignoring event" container=b7e03e28422f946d3c5a6411fba0f29fa3bf1a276afe9981cffb7e8c840d6a6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:52:05 minikube dockerd[215]: time="2021-08-11T17:52:05.067067100Z" level=info msg="ignoring event" container=f4466748cd0c699135f7897019e48041a7c7d64b605329c6b012128111f7926e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:52:36 minikube dockerd[215]: time="2021-08-11T17:52:36.859384500Z" level=info msg="ignoring event" container=1991231100d154c3af2c1c7945736149f5fb975a136309bbb51c57e5f62ae662 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:52:36 minikube dockerd[215]: time="2021-08-11T17:52:36.859468500Z" level=info msg="ignoring event" container=6e8ad8f9f9fd4c6a58c0b7a2a4e60db88bdde224bf94796362eb1572591c0b45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:52:46 minikube dockerd[215]: time="2021-08-11T17:52:46.111105300Z" level=info msg="ignoring event" container=31b6ab22e0c3927b16e110b9d429cb6da83b1c6a4b54228a080b9b10b72d1e3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:52:47 minikube dockerd[215]: time="2021-08-11T17:52:47.466673000Z" level=info msg="ignoring event" container=94b96c238f960e4c316560de22334e9870948b930ffb4f9466eb6368b251c16e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:53:00 minikube dockerd[215]: time="2021-08-11T17:53:00.414695200Z" level=info msg="ignoring event" container=e35628ea1ec494097f007c48f9990a7ca7c2b4fe950f1ecb17b4cebb73608924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:53:06 minikube dockerd[215]: time="2021-08-11T17:53:06.352354600Z" level=info msg="Container 9e9a1d7f4b62b0eac99c9e1bf3bcfd4f83580369efc5b690cf6bf27a4f19f0ca failed to exit within 30 seconds of signal 15 - using the force"
Aug 11 17:53:06 minikube dockerd[215]: time="2021-08-11T17:53:06.401411700Z" level=info msg="ignoring event" container=9e9a1d7f4b62b0eac99c9e1bf3bcfd4f83580369efc5b690cf6bf27a4f19f0ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:53:06 minikube dockerd[215]: time="2021-08-11T17:53:06.465994000Z" level=info msg="ignoring event" container=5ef015700410852bd35c281ee66f30de9dc089ddecb0ebe9b149aa30b3bc7049 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:53:28 minikube dockerd[215]: time="2021-08-11T17:53:28.383028300Z" level=info msg="ignoring event" container=9fa0ffce2e4c233ab3dde14b367d174cf079045afd7f5821432b6668ed8d27fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:54:10 minikube dockerd[215]: time="2021-08-11T17:54:10.337495300Z" level=info msg="ignoring event" container=5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:54:40 minikube dockerd[215]: time="2021-08-11T17:54:40.682505600Z" level=info msg="ignoring event" container=0e2048fa63147ec26a70ae1b4cb4d47d2047f69592d42b11a937bf0ffc1796dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:54:46 minikube dockerd[215]: time="2021-08-11T17:54:46.566395800Z" level=info msg="ignoring event" container=4a35c1b85b8d21b25ea5e735624a426e3ef672188e3d967539ee4cdf6a527748 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:54:47 minikube dockerd[215]: time="2021-08-11T17:54:47.834133200Z" level=info msg="ignoring event" container=16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:55:10 minikube dockerd[215]: time="2021-08-11T17:55:10.198284200Z" level=info msg="Container 0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e failed to exit within 30 seconds of signal 15 - using the force"
Aug 11 17:55:10 minikube dockerd[215]: time="2021-08-11T17:55:10.209080200Z" level=info msg="Container 06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16 failed to exit within 30 seconds of signal 15 - using the force"
Aug 11 17:55:10 minikube dockerd[215]: time="2021-08-11T17:55:10.281541500Z" level=info msg="ignoring event" container=0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:55:10 minikube dockerd[215]: time="2021-08-11T17:55:10.281586800Z" level=info msg="ignoring event" container=06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:55:10 minikube dockerd[215]: time="2021-08-11T17:55:10.408732000Z" level=info msg="ignoring event" container=d5ca13b99e63cbbc3597787a78b01f84b8d35fdbd780173b78eb5e5e31bb8f7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 11 17:55:10 minikube dockerd[215]: time="2021-08-11T17:55:10.408872500Z" level=info msg="ignoring event" container=56daeba96f87d25e1e86de7a51f8e7130828582deab9be1e333b6709d09166e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
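The only errors in the Docker log above are the ECR pull failures at 17:48:11, where the daemon's lookup through Docker Desktop's embedded resolver (192.168.65.2:53) timed out. To re-run that exact lookup from inside the node, a sketch (whether nslookup ships in the kicbase image is an assumption; dig or getent would do as well):

# Repeat the lookup the daemon attempted, against the same resolver:
minikube ssh -- nslookup 875098767412.dkr.ecr.us-east-1.amazonaws.com 192.168.65.2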
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
60bf3e43da404 875098767412.dkr.ecr.us-east-1.amazonaws.com/kudo-kcp-meeting-app@sha256:39c325a2b59b20d8f0b0c97dfb6f2781d872652731321f27bca89d61d7a09821 5 minutes ago Running meetings 2 c240a97801b37
16bc262ad1016 875098767412.dkr.ecr.us-east-1.amazonaws.com/kudo-kcp-meeting-app@sha256:39c325a2b59b20d8f0b0c97dfb6f2781d872652731321f27bca89d61d7a09821 5 minutes ago Exited meetings 1 c240a97801b37
2d8bb2abc2345 875098767412.dkr.ecr.us-east-1.amazonaws.com/kudo-kcp-dev-login@sha256:b51c29611b4605f18bc85ec76ec915475d048ba876fcc7a6d5ec5408b0d56838 5 minutes ago Running dev-login 0 7b6c68238d2cd
63984a21e9133 875098767412.dkr.ecr.us-east-1.amazonaws.com/kcp-hydra@sha256:44318c924f1b7fa370e53dc4098ba0a5fcf3792154bed3daa9713fc7c2df0f1f 5 minutes ago Running hydra 0 3ed52d81849b4
378df0f3ce200 6e38f40d628db 20 hours ago Running storage-provisioner 2 15de54b820e3c
61c7576e859d0 a6ebd1c1ad981 20 hours ago Running kube-proxy 1 361c4b8874b99
ce5fe34adb9e7 a2fd0654e5bae 20 hours ago Running registry-creds 1 8147b6338fb3c
f6330ff967210 296a6d5035e2d 20 hours ago Running coredns 1 9c1f36119f19a
01a3799e1affc 6e38f40d628db 20 hours ago Exited storage-provisioner 1 15de54b820e3c
444f9d5def839 f917b8c8f55b7 20 hours ago Running kube-scheduler 1 aa96ce8be5e71
b8ad6f8a8e6f3 0369cf4303ffd 20 hours ago Running etcd 1 c8540aed040f8
2fdd4cd647a79 106ff58d43082 20 hours ago Running kube-apiserver 1 e4f0eb3272017
aa431070033bc ae24db9aa2cc0 20 hours ago Running kube-controller-manager 1 ada115975a4c3
7eb70906ae037 upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 2 weeks ago Exited registry-creds 0 a905b63df8b12
99633f0f0b073 296a6d5035e2d 2 weeks ago Exited coredns 0 a20cdef8315d6
5ad45854bdd97 a6ebd1c1ad981 2 weeks ago Exited kube-proxy 0 425e940a9d07f
7f62fb1457b0b 0369cf4303ffd 2 weeks ago Exited etcd 0 05f4bc252791a
397807f9f5945 ae24db9aa2cc0 2 weeks ago Exited kube-controller-manager 0 2f1676eb9c062
229ff1bb8f586 106ff58d43082 2 weeks ago Exited kube-apiserver 0 a171a07e92475
b5f048955e8fc f917b8c8f55b7 2 weeks ago Exited kube-scheduler 0 3244de780eded
==> coredns [99633f0f0b07] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [f6330ff96721] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0810 21:43:14.320742 1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (10-Aug-2021 21:42:44.311) (total time: 30042ms):
Trace[2019727887]: [30.042662s] [30.042662s] END
E0810 21:43:14.320835 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0810 21:43:14.320983 1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (10-Aug-2021 21:42:44.311) (total time: 30043ms):
Trace[939984059]: [30.0430101s] [30.0430101s] END
E0810 21:43:14.321067 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0810 21:43:14.321149 1 trace.go:205] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (10-Aug-2021 21:42:44.311) (total time: 30042ms):
Trace[1427131847]: [30.0425135s] [30.0425135s] END
E0810 21:43:14.321255 1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
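The restarted CoreDNS instance above does sync eventually, and per the start log its config already contains the host.minikube.internal record. To double-check that record in-cluster, a sketch (the busybox image choice is arbitrary):

# The coredns ConfigMap that minikube patches; host.minikube.internal should appear here:
kubectl -n kube-system get configmap coredns -o yaml | grep -n -B 1 -A 1 host.minikube.internal

# Resolve it from a throwaway pod:
kubectl run dnscheck --rm -it --restart=Never --image=busybox:1.28 -- nslookup host.minikube.internal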
==> describe nodes <==
Name: minikube
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=a03fbcf166e6f74ef224d4a63be4277d017bb62e
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2021_07_26T13_33_27_0700
minikube.k8s.io/version=v1.22.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 26 Jul 2021 18:33:23 +0000
Taints: &lt;none&gt;
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: &lt;unset&gt;
RenewTime: Wed, 11 Aug 2021 18:00:24 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Wed, 11 Aug 2021 17:59:52 +0000 Mon, 26 Jul 2021 18:33:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 11 Aug 2021 17:59:52 +0000 Mon, 26 Jul 2021 18:33:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 11 Aug 2021 17:59:52 +0000 Mon, 26 Jul 2021 18:33:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 11 Aug 2021 17:59:52 +0000 Mon, 26 Jul 2021 18:33:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 6
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2034968Ki
pods: 110
Allocatable:
cpu: 6
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2034968Ki
pods: 110
System Info:
Machine ID: 760e67beb8554645829f2357c8eb4ae7
System UUID: 3aff117e-514c-401a-a16e-42f640775faa
Boot ID: 3eebc0be-3428-4aa3-ab0b-3308c274b006
Kernel Version: 5.10.25-linuxkit
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.7
Kubelet Version: v1.21.2
Kube-Proxy Version: v1.21.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
default dev-login-5f7c84dcff-qbqcm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m52s
default hydra-6f6468fb4c-sz54x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m52s
default meetings-5db87dbdff-wl5pm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m52s
kube-system coredns-558bd4d5db-6f6jb 100m (1%) 0 (0%) 70Mi (3%) 170Mi (8%) 15d
kube-system etcd-minikube 100m (1%) 0 (0%) 100Mi (5%) 0 (0%) 15d
kube-system kube-apiserver-minikube 250m (4%) 0 (0%) 0 (0%) 0 (0%) 15d
kube-system kube-controller-manager-minikube 200m (3%) 0 (0%) 0 (0%) 0 (0%) 15d
kube-system kube-proxy-9bhxg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d
kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 15d
kube-system registry-creds-85b974c7d7-h96t2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
cpu 750m (12%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
==> dmesg <==
[Aug10 12:56] ERROR: earlyprintk= earlyser already used
[ +0.000000] ERROR: earlyprintk= earlyser already used
[ +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0x7E, should be 0xDB (20200925/tbprint-173)
[ +0.203540] #2
[ +0.063084] #3
[ +0.063924] #4
[ +0.064007] #5
[ +1.967276] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.034435] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[ +0.001783] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +4.040373] grpcfuse: loading out-of-tree module taints kernel.
[Aug10 14:24] Hangcheck: hangcheck value past margin!
[ +0.000043] clocksource: timekeeping watchdog on CPU0: Marking clocksource 'tsc' as unstable because the skew is too large:
[ +0.009229] clocksource: 'hpet' wd_now: 26f996f9 wd_last: 259dc06c mask: ffffffff
[ +0.004967] clocksource: 'tsc' cs_now: 30a74110f014 cs_last: 2e862b2eb048 mask: ffffffffffffffff
[ +0.007327] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[Aug10 16:05] hrtimer: interrupt took 4847800 ns
==> etcd [7f62fb1457b0] <==
2021-08-03 14:30:51.874441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:31:01.876149 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:31:11.842072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:31:21.841501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:31:31.842193 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:31:41.807286 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:31:43.129438 I | mvcc: store.index: compact 60848
2021-08-03 14:31:43.130122 I | mvcc: finished scheduled compaction at 60848 (took 380.7µs)
2021-08-03 14:31:51.806725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:32:01.807445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:32:11.774697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:32:21.772679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:32:31.773443 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:32:41.739558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:32:51.739169 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:33:01.739798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:33:11.705356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:33:21.706641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:33:31.706626 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:33:41.671832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:33:51.671581 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:34:01.672386 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:34:11.638417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:34:21.637244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:34:31.638071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:34:41.604671 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:34:51.604819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:35:01.604085 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:35:11.571170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:35:21.571116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:35:31.570868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:35:41.536393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:35:51.537101 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:36:01.537462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:36:11.502460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:36:21.501860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:36:31.501801 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:36:41.468885 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:36:42.795591 I | mvcc: store.index: compact 61057
2021-08-03 14:36:42.796165 I | mvcc: finished scheduled compaction at 61057 (took 339.4µs)
2021-08-03 14:36:51.468050 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:37:01.468532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:37:11.433848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:37:21.434265 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:37:31.435340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:37:41.400185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:37:51.400206 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:38:01.399866 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:38:11.369417 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:38:21.368519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:38:31.368724 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:38:41.332870 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:38:51.332964 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:39:01.333163 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:39:11.298921 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:39:21.298935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-03 14:39:29.685604 N | pkg/osutil: received terminated signal, shutting down...
2021-08-03 14:39:29.885054 I | etcdserver: skipped leadership transfer for single voting member cluster
WARNING: 2021/08/03 14:39:29 grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: operation was canceled". Reconnecting...
WARNING: 2021/08/03 14:39:29 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
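The terminated signal and the two reconnect warnings above are just the teardown sequence from stopping the node, not a crash. For completeness, etcd health in a running cluster can be spot-checked like this (a sketch; the pod name and certificate paths assume minikube's single-node defaults):
# Query etcd's health endpoint from inside its static pod.
kubectl -n kube-system exec etcd-minikube -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
  --cert=/var/lib/minikube/certs/etcd/server.crt \
  --key=/var/lib/minikube/certs/etcd/server.key \
  endpoint health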
==> etcd [b8ad6f8a8e6f] <==
2021-08-11 17:51:03.686620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:51:13.651665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:51:23.651488 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:51:33.652093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:51:43.715258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:51:53.716125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:52:03.715718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:52:13.680258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:52:23.681495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:52:33.681602 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:52:43.650880 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:52:53.647524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:53:03.647581 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:53:13.613781 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:53:23.613059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:53:33.613487 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:53:43.578448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:53:53.577843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:54:03.577940 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:54:13.545441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:54:23.543697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:54:33.544530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:54:43.510518 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:54:53.509194 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:55:03.509129 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:55:13.475739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:55:23.474961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:55:33.474465 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:55:35.771367 I | mvcc: store.index: compact 76067
2021-08-11 17:55:35.796146 I | mvcc: finished scheduled compaction at 76067 (took 24.226ms)
2021-08-11 17:55:43.439996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:55:53.442907 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:56:03.440416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:56:13.406415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:56:23.406328 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:56:33.406226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:56:43.372416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:56:53.371790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:57:03.371699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:57:13.338705 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:57:23.337638 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:57:33.338443 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:57:43.303759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:57:53.302508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:58:03.302568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:58:13.268660 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:58:23.268166 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:58:33.268561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:58:43.234376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:58:53.232910 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:59:03.234526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:59:13.199387 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:59:23.199693 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:59:33.199615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:59:43.165679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 17:59:53.164887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 18:00:03.166161 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 18:00:13.129889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 18:00:23.131043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-08-11 18:00:33.130641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
18:00:35 up 1 day, 5:03, 0 users, load average: 0.56, 0.56, 0.92
Linux minikube 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
==> kube-apiserver [229ff1bb8f58] <==
W0803 14:39:30.848810 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.848931 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.848962 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.849042 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.944201 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.944424 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.945232 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.945551 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.951396 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.976469 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.976469 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.976968 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977155 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977348 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977434 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977523 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977550 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977605 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977618 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977668 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977810 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977918 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.977995 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978016 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978061 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978118 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978157 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978188 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978287 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978467 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.978806 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.979067 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.979091 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.979290 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.979618 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.979633 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.980224 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.980224 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.980232 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.980923 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.980973 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981078 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981085 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981092 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981401 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981436 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981718 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.981938 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.982556 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.982597 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.982854 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.983106 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.983143 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.983260 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.983690 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.983955 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.984140 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.984243 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.984332 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0803 14:39:30.983703 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
==> kube-apiserver [2fdd4cd647a7] <==
I0811 17:49:05.024529 1 client.go:360] parsed scheme: "passthrough"
I0811 17:49:05.024632 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:49:05.024662 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:49:42.440871 1 client.go:360] parsed scheme: "passthrough"
I0811 17:49:42.440988 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:49:42.441027 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:50:14.736881 1 client.go:360] parsed scheme: "passthrough"
I0811 17:50:14.736950 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:50:14.736989 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:50:52.809660 1 client.go:360] parsed scheme: "passthrough"
I0811 17:50:52.809765 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:50:52.809791 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:51:24.209453 1 client.go:360] parsed scheme: "passthrough"
I0811 17:51:24.209566 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:51:24.209606 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:51:55.696927 1 client.go:360] parsed scheme: "passthrough"
I0811 17:51:55.697130 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:51:55.697194 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:52:33.863806 1 client.go:360] parsed scheme: "passthrough"
I0811 17:52:33.863924 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:52:33.864036 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:53:18.147728 1 client.go:360] parsed scheme: "passthrough"
I0811 17:53:18.147792 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:53:18.147814 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:53:53.931019 1 client.go:360] parsed scheme: "passthrough"
I0811 17:53:53.931224 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:53:53.931260 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:54:27.343764 1 client.go:360] parsed scheme: "passthrough"
I0811 17:54:27.344034 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:54:27.344090 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:55:05.860828 1 client.go:360] parsed scheme: "passthrough"
I0811 17:55:05.860892 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:55:05.860945 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:55:27.328224 1 trace.go:205] Trace[87906880]: "Get" url:/api/v1/namespaces/default/pods/meetings-5db87dbdff-wl5pm/log,user-agent:kubectl/v1.21.2 (darwin/amd64) kubernetes/092fbfb,client:192.168.49.1,accept:application/json, */*,protocol:HTTP/2.0 (11-Aug-2021 17:55:25.179) (total time: 2143ms):
Trace[87906880]: ---"Transformed response object" 2141ms (17:55:00.323)
Trace[87906880]: [2.1436949s] [2.1436949s] END
I0811 17:55:47.816739 1 client.go:360] parsed scheme: "passthrough"
I0811 17:55:47.816846 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:55:47.816886 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:56:26.541646 1 client.go:360] parsed scheme: "passthrough"
I0811 17:56:26.541690 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:56:26.541717 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:56:59.474673 1 client.go:360] parsed scheme: "passthrough"
I0811 17:56:59.474790 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:56:59.474829 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:57:33.429152 1 client.go:360] parsed scheme: "passthrough"
I0811 17:57:33.429372 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:57:33.429415 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:58:14.230508 1 client.go:360] parsed scheme: "passthrough"
I0811 17:58:14.230625 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:58:14.230656 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:58:47.606244 1 client.go:360] parsed scheme: "passthrough"
I0811 17:58:47.606359 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:58:47.606398 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 17:59:32.438112 1 client.go:360] parsed scheme: "passthrough"
I0811 17:59:32.438420 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 17:59:32.438602 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0811 18:00:16.779918 1 client.go:360] parsed scheme: "passthrough"
I0811 18:00:16.780045 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0811 18:00:16.780086 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [397807f9f594] <==
I0726 18:33:56.499189 1 event.go:291] "Event occurred" object="default/kudo-auth-service-b5fb4f998" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-b5fb4f998-w8hwf"
I0726 18:36:19.089177 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:36:19.106508 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-gckd5"
I0726 18:36:41.049078 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:36:41.056554 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-mr5g7"
I0726 18:38:37.634266 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:38:37.650814 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-mdq2n"
I0726 18:39:03.871529 1 event.go:291] "Event occurred" object="kube-system/registry-creds" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set registry-creds-85b974c7d7 to 1"
I0726 18:39:03.934580 1 event.go:291] "Event occurred" object="kube-system/registry-creds-85b974c7d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-creds-85b974c7d7-h96t2"
I0726 18:39:14.272970 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-6446f88f56 to 1"
I0726 18:39:14.295636 1 event.go:291] "Event occurred" object="default/kudo-auth-service-6446f88f56" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-6446f88f56-l2qzr"
I0726 18:52:17.847676 1 event.go:291] "Event occurred" object="default/kudo-auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kudo-auth-service-5587d67fbb to 1"
I0726 18:52:17.862094 1 event.go:291] "Event occurred" object="default/kudo-auth-service-5587d67fbb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kudo-auth-service-5587d67fbb-9pc94"
I0726 18:54:43.922557 1 event.go:291] "Event occurred" object="default/auth-service" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set auth-service-5544f8f75d to 1"
I0726 18:54:43.930738 1 event.go:291] "Event occurred" object="default/auth-service-5544f8f75d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: auth-service-5544f8f75d-z2zw9"
I0726 20:33:18.617715 1 cleaner.go:180] Cleaning CSR "csr-svcqq" as it is more than 1h0m0s old and approved.
E0726 23:09:48.109494 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0726 23:09:48.116277 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 02:00:13.481544 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0727 02:00:13.481551 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 12:44:30.495040 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0727 12:44:30.495023 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 15:03:58.528707 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0727 15:03:58.528729 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0727 17:51:28.109264 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0727 17:51:28.125556 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 01:33:17.978345 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 01:33:18.037941 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 12:47:52.804388 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 12:47:52.904316 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 14:55:43.199852 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 14:55:43.246170 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 17:58:46.932393 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0728 17:58:47.019846 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0728 21:40:04.673774 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0728 21:40:04.673846 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0729 12:56:50.192441 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0729 12:56:50.611737 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0729 19:03:41.701575 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0729 19:03:41.701889 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0730 07:06:53.151016 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0730 07:06:53.216307 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0730 16:50:31.279367 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0730 16:50:31.653367 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0730 17:59:19.014468 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0730 17:59:19.339104 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0731 00:30:03.012033 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0731 00:30:03.295075 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0801 23:51:31.749627 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0801 23:51:31.749627 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0802 09:45:07.807829 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0802 09:45:07.807927 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0802 17:05:29.897665 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0802 17:05:29.897665 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0802 18:08:38.833639 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0802 18:08:38.834050 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0802 21:33:36.756468 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0802 21:33:36.756495 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0803 12:59:17.531844 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0803 12:59:17.531889 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
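The recurring "Unauthorized" discovery errors above only show up after the cluster has been up (and the laptop sleeping) for days; my assumption is stale client credentials rather than anything DNS-related. One quick thing to rule out is certificate expiry (default path for the "minikube" profile):
# Print the validity window of the client certificate kubectl/minikube use.
openssl x509 -in ~/.minikube/profiles/minikube/client.crt -noout -dates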
==> kube-controller-manager [aa431070033b] <==
I0810 21:42:51.357652 1 shared_informer.go:247] Caches are synced for deployment
I0810 21:42:51.793142 1 shared_informer.go:247] Caches are synced for garbage collector
I0810 21:42:51.807581 1 shared_informer.go:247] Caches are synced for garbage collector
I0810 21:42:51.807646 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
E0810 23:42:11.072566 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0810 23:42:11.074575 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
W0811 12:58:12.058202 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
E0811 12:58:12.058202 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
E0811 17:12:03.840496 1 resource_quota_controller.go:409] failed to discover resources: Unauthorized
W0811 17:12:03.840526 1 garbagecollector.go:705] failed to discover preferred resources: Unauthorized
I0811 17:26:22.433234 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-69688d4579 to 1"
I0811 17:26:22.433431 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-667c95f5b8 to 1"
I0811 17:26:22.496612 1 event.go:291] "Event occurred" object="default/dev-login-69688d4579" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-69688d4579-4gdhz"
I0811 17:26:22.499530 1 event.go:291] "Event occurred" object="default/hydra-667c95f5b8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-667c95f5b8-t45ht"
I0811 17:27:13.345094 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-77954dc59b to 1"
I0811 17:27:13.393715 1 event.go:291] "Event occurred" object="default/meetings-77954dc59b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-77954dc59b-nfpmk"
I0811 17:28:01.292815 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-8bcb8d68f to 1"
I0811 17:28:01.293365 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-6d485cb46c to 1"
I0811 17:28:01.389412 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5796658bb4 to 1"
I0811 17:28:01.390288 1 event.go:291] "Event occurred" object="default/dev-login-6d485cb46c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-6d485cb46c-472jc"
I0811 17:28:01.391926 1 event.go:291] "Event occurred" object="default/hydra-8bcb8d68f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-8bcb8d68f-s4q2q"
I0811 17:28:01.488154 1 event.go:291] "Event occurred" object="default/meetings-5796658bb4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5796658bb4-w88ch"
I0811 17:41:55.828225 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:41:55.828319 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:41:55.830248 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:41:55.859355 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-fsgt6"
I0811 17:41:55.859440 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-b9rbz"
I0811 17:41:55.865379 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-4xbll"
I0811 17:42:20.753579 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:42:20.768368 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-w2qlb"
I0811 17:42:20.774364 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:42:20.788463 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-b2r28"
I0811 17:42:20.891258 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:42:20.921016 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-jmwwm"
I0811 17:46:11.289941 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:46:11.290025 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:46:11.291849 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:46:11.305681 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-2gtw7"
I0811 17:46:11.308453 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-jbwnz"
I0811 17:46:11.311909 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-47dct"
I0811 17:46:18.160521 1 event.go:291] "Event occurred" object="default/meetings" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint default/meetings: Operation cannot be fulfilled on endpoints "meetings": the object has been modified; please apply your changes to the latest version and try again"
I0811 17:46:21.167364 1 event.go:291] "Event occurred" object="default/dev-login" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint default/dev-login: Operation cannot be fulfilled on endpoints "dev-login": the object has been modified; please apply your changes to the latest version and try again"
I0811 17:49:51.011376 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:49:51.017527 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-568d5bc6fb to 1"
I0811 17:49:51.029204 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-7nnq8"
I0811 17:49:51.029764 1 event.go:291] "Event occurred" object="default/dev-login-568d5bc6fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-568d5bc6fb-6xjvz"
I0811 17:49:51.040074 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:49:51.068439 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-zbwwl"
I0811 17:52:41.039085 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6699d9db88 to 1"
I0811 17:52:41.049661 1 event.go:291] "Event occurred" object="default/hydra-6699d9db88" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6699d9db88-dlw7p"
I0811 17:52:41.051425 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-d9c6b5dcb to 1"
I0811 17:52:41.060371 1 event.go:291] "Event occurred" object="default/dev-login-d9c6b5dcb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-d9c6b5dcb-ndz5v"
I0811 17:52:41.158308 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5fd8b9d97f to 1"
I0811 17:52:41.171037 1 event.go:291] "Event occurred" object="default/meetings-5fd8b9d97f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5fd8b9d97f-fx7f5"
I0811 17:54:42.454143 1 event.go:291] "Event occurred" object="default/hydra" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hydra-6f6468fb4c to 1"
I0811 17:54:42.467471 1 event.go:291] "Event occurred" object="default/hydra-6f6468fb4c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hydra-6f6468fb4c-sz54x"
I0811 17:54:42.467508 1 event.go:291] "Event occurred" object="default/dev-login" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dev-login-5f7c84dcff to 1"
I0811 17:54:42.471722 1 event.go:291] "Event occurred" object="default/dev-login-5f7c84dcff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dev-login-5f7c84dcff-qbqcm"
I0811 17:54:42.518243 1 event.go:291] "Event occurred" object="default/meetings" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set meetings-5db87dbdff to 1"
I0811 17:54:42.532086 1 event.go:291] "Event occurred" object="default/meetings-5db87dbdff" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: meetings-5db87dbdff-wl5pm"
==> kube-proxy [5ad45854bdd9] <==
W0730 14:18:40.714473 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 14:26:40.182694 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 14:32:05.794435 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 16:51:05.164603 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0730 17:59:56.535691 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 12:58:15.333764 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:08:02.628292 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:14:49.137170 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:22:14.704450 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:29:58.151573 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:35:04.862006 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:44:07.247538 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:51:05.802996 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 13:58:04.331950 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:04:00.925246 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:11:54.383570 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:19:21.893873 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:26:20.423755 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0802 14:33:12.989753 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
[... identical warning repeated at roughly 5-10 minute intervals; 40 intermediate entries from 0802 14:42 through 0803 14:25 omitted ...]
W0803 14:34:56.850466 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
==> kube-proxy [61c7576e859d] <==
I0810 21:42:45.386567 1 server_others.go:140] Detected node IP 192.168.49.2
W0810 21:42:45.386618 1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0810 21:42:45.478849 1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0810 21:42:45.478964 1 server_others.go:212] Using iptables Proxier.
I0810 21:42:45.479005 1 server_others.go:219] creating dualStackProxier for iptables.
W0810 21:42:45.479110 1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0810 21:42:45.481806 1 server.go:643] Version: v1.21.2
I0810 21:42:45.482881 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0810 21:42:45.482947 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0810 21:42:45.485395 1 config.go:315] Starting service config controller
I0810 21:42:45.485532 1 config.go:224] Starting endpoint slice config controller
I0810 21:42:45.486411 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0810 21:42:45.487661 1 shared_informer.go:240] Waiting for caches to sync for service config
W0810 21:42:45.493790 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0810 21:42:45.495864 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0810 21:42:45.586678 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0810 21:42:45.589483 1 shared_informer.go:247] Caches are synced for service config
W0810 21:51:12.921716 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
[... identical warning repeated at roughly 5-10 minute intervals; 41 intermediate entries from 0810 21:57 through 0811 17:53 omitted ...]
W0811 17:59:49.520700 1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
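Note on the repeated EndpointSlice warnings: these are client-side deprecation notices, emitted because kube-proxy on Kubernetes v1.21 still watches the v1beta1 EndpointSlice API; they are noisy but harmless until v1.25 removes the API. A quick way to confirm the replacement v1 API is already being served might look like this (illustrative commands, not part of the original log):

# Check that discovery.k8s.io/v1 is advertised by the apiserver.
kubectl api-versions | grep discovery.k8s.io
# List EndpointSlices via the fully-qualified v1 resource to prove it resolves.
kubectl get endpointslices.v1.discovery.k8s.io -A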
==> kube-scheduler [444f9d5def83] <==
I0810 21:42:32.192914 1 serving.go:347] Generated self-signed cert in-memory
W0810 21:42:37.751137 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0810 21:42:37.751281 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0810 21:42:37.751309 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0810 21:42:37.751328 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0810 21:42:37.969364 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0810 21:42:37.972674 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0810 21:42:37.972918 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0810 21:42:37.973050 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0810 21:42:38.174116 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [b5f048955e8f] <==
I0726 18:33:19.432183 1 serving.go:347] Generated self-signed cert in-memory
W0726 18:33:23.653172 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0726 18:33:23.653517 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0726 18:33:23.653734 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
W0726 18:33:23.653918 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0726 18:33:23.840215 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0726 18:33:23.840588 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0726 18:33:23.844310 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0726 18:33:23.844418 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0726 18:33:23.848591 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:23.849351 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:23.859711 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0726 18:33:23.860233 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0726 18:33:23.860252 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0726 18:33:23.861194 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0726 18:33:23.861350 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0726 18:33:23.861471 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0726 18:33:23.861680 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0726 18:33:23.861829 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0726 18:33:23.861967 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0726 18:33:23.862162 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:23.928284 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0726 18:33:23.929198 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:24.668416 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:24.739675 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0726 18:33:24.757037 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0726 18:33:24.832570 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0726 18:33:24.839722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0726 18:33:24.852919 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0726 18:33:24.944301 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0726 18:33:25.043969 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0726 18:33:25.055946 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:25.072559 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0726 18:33:25.096445 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0726 18:33:28.042116 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
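The burst of "forbidden" reflector errors above is a common startup race: the scheduler's informers begin listing resources before the bootstrap RBAC rules have propagated, and the errors stop once the caches sync (last line). A hedged way to verify the permissions settled after startup, assuming admin access to the cluster:

# Impersonate the scheduler identity and confirm it can now list cluster-scoped resources.
kubectl auth can-i list nodes --as=system:kube-scheduler
kubectl auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler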
==> kubelet <==
-- Logs begin at Tue 2021-08-10 21:42:05 UTC, end at Wed 2021-08-11 18:00:35 UTC. --
Aug 11 17:54:10 minikube kubelet[1170]: E0811 17:54:10.820065 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 1m20s restarting failed container=meetings pod=meetings-5fd8b9d97f-fx7f5_default(681ed0d3-7255-459a-b96c-36e019b9e9f5)"" pod="default/meetings-5fd8b9d97f-fx7f5" podUID=681ed0d3-7255-459a-b96c-36e019b9e9f5
Aug 11 17:54:11 minikube kubelet[1170]: I0811 17:54:11.832071 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5fd8b9d97f-fx7f5 through plugin: invalid network status for"
Aug 11 17:54:25 minikube kubelet[1170]: I0811 17:54:25.674739 1170 scope.go:111] "RemoveContainer" containerID="5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f"
Aug 11 17:54:25 minikube kubelet[1170]: E0811 17:54:25.675399 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 1m20s restarting failed container=meetings pod=meetings-5fd8b9d97f-fx7f5_default(681ed0d3-7255-459a-b96c-36e019b9e9f5)"" pod="default/meetings-5fd8b9d97f-fx7f5" podUID=681ed0d3-7255-459a-b96c-36e019b9e9f5
Aug 11 17:54:37 minikube kubelet[1170]: I0811 17:54:37.674430 1170 scope.go:111] "RemoveContainer" containerID="5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f"
Aug 11 17:54:37 minikube kubelet[1170]: E0811 17:54:37.675118 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 1m20s restarting failed container=meetings pod=meetings-5fd8b9d97f-fx7f5_default(681ed0d3-7255-459a-b96c-36e019b9e9f5)"" pod="default/meetings-5fd8b9d97f-fx7f5" podUID=681ed0d3-7255-459a-b96c-36e019b9e9f5
Aug 11 17:54:41 minikube kubelet[1170]: I0811 17:54:41.236671 1170 scope.go:111] "RemoveContainer" containerID="5041e4aacbfa3357ee83b14b758f1191683e67b491f9b298268a0cce2ec0513f"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.327480 1170 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-smt7h" (UniqueName: "kubernetes.io/projected/681ed0d3-7255-459a-b96c-36e019b9e9f5-kube-api-access-smt7h") pod "681ed0d3-7255-459a-b96c-36e019b9e9f5" (UID: "681ed0d3-7255-459a-b96c-36e019b9e9f5") "
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.330346 1170 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681ed0d3-7255-459a-b96c-36e019b9e9f5-kube-api-access-smt7h" (OuterVolumeSpecName: "kube-api-access-smt7h") pod "681ed0d3-7255-459a-b96c-36e019b9e9f5" (UID: "681ed0d3-7255-459a-b96c-36e019b9e9f5"). InnerVolumeSpecName "kube-api-access-smt7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.428586 1170 reconciler.go:319] "Volume detached for volume "kube-api-access-smt7h" (UniqueName: "kubernetes.io/projected/681ed0d3-7255-459a-b96c-36e019b9e9f5-kube-api-access-smt7h") on node "minikube" DevicePath """
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.474442 1170 topology_manager.go:187] "Topology Admit Handler"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.508782 1170 topology_manager.go:187] "Topology Admit Handler"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.618455 1170 topology_manager.go:187] "Topology Admit Handler"
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.629791 1170 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-h47jg" (UniqueName: "kubernetes.io/projected/9857b4aa-4c34-4e33-815d-c7f45595c8ce-kube-api-access-h47jg") pod "dev-login-5f7c84dcff-qbqcm" (UID: "9857b4aa-4c34-4e33-815d-c7f45595c8ce") "
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.629894 1170 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-c85cc" (UniqueName: "kubernetes.io/projected/8a7c8f8b-895b-4ecb-8dd1-a6148a36c111-kube-api-access-c85cc") pod "hydra-6f6468fb4c-sz54x" (UID: "8a7c8f8b-895b-4ecb-8dd1-a6148a36c111") "
Aug 11 17:54:42 minikube kubelet[1170]: I0811 17:54:42.730678 1170 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume "kube-api-access-99kfq" (UniqueName: "kubernetes.io/projected/96449dce-9e25-44ee-87af-a1d3b7777aa1-kube-api-access-99kfq") pod "meetings-5db87dbdff-wl5pm" (UID: "96449dce-9e25-44ee-87af-a1d3b7777aa1") "
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.728409 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hydra-6f6468fb4c-sz54x through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.736198 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.880735 1170 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c240a97801b37eebe0911ec9775abb5e4eeabca16c3419db371dbd01d0a5fb13"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.885276 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.888797 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hydra-6f6468fb4c-sz54x through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.895827 1170 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3ed52d81849b4c5dedcfe627dc78e904619a1e68fa487fd5fc659c89d8d69c98"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.899481 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:43 minikube kubelet[1170]: I0811 17:54:43.906533 1170 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7b6c68238d2cd825c10eea4ca6c331dec1f767129b7d4bf53a9f3b44e9f66a21"
Aug 11 17:54:44 minikube kubelet[1170]: I0811 17:54:44.926337 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:44 minikube kubelet[1170]: I0811 17:54:44.932698 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:44 minikube kubelet[1170]: I0811 17:54:44.938697 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/hydra-6f6468fb4c-sz54x through plugin: invalid network status for"
Aug 11 17:54:45 minikube kubelet[1170]: I0811 17:54:45.957855 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dev-login-5f7c84dcff-qbqcm through plugin: invalid network status for"
Aug 11 17:54:46 minikube kubelet[1170]: I0811 17:54:46.977308 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:46 minikube kubelet[1170]: I0811 17:54:46.984389 1170 scope.go:111] "RemoveContainer" containerID="4a35c1b85b8d21b25ea5e735624a426e3ef672188e3d967539ee4cdf6a527748"
Aug 11 17:54:48 minikube kubelet[1170]: I0811 17:54:48.002869 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:48 minikube kubelet[1170]: I0811 17:54:48.014080 1170 scope.go:111] "RemoveContainer" containerID="4a35c1b85b8d21b25ea5e735624a426e3ef672188e3d967539ee4cdf6a527748"
Aug 11 17:54:48 minikube kubelet[1170]: I0811 17:54:48.014589 1170 scope.go:111] "RemoveContainer" containerID="16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4"
Aug 11 17:54:48 minikube kubelet[1170]: E0811 17:54:48.015100 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 10s restarting failed container=meetings pod=meetings-5db87dbdff-wl5pm_default(96449dce-9e25-44ee-87af-a1d3b7777aa1)"" pod="default/meetings-5db87dbdff-wl5pm" podUID=96449dce-9e25-44ee-87af-a1d3b7777aa1
Aug 11 17:54:49 minikube kubelet[1170]: I0811 17:54:49.039202 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:54:49 minikube kubelet[1170]: I0811 17:54:49.047816 1170 scope.go:111] "RemoveContainer" containerID="16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4"
Aug 11 17:54:49 minikube kubelet[1170]: E0811 17:54:49.048277 1170 pod_workers.go:190] "Error syncing pod, skipping" err="failed to "StartContainer" for "meetings" with CrashLoopBackOff: "back-off 10s restarting failed container=meetings pod=meetings-5db87dbdff-wl5pm_default(96449dce-9e25-44ee-87af-a1d3b7777aa1)"" pod="default/meetings-5db87dbdff-wl5pm" podUID=96449dce-9e25-44ee-87af-a1d3b7777aa1
Aug 11 17:54:58 minikube kubelet[1170]: E0811 17:54:58.264860 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:02 minikube kubelet[1170]: I0811 17:55:02.639812 1170 scope.go:111] "RemoveContainer" containerID="16bc262ad1016fad926efeb337069f7148d4918a189413d7a31466d6b6bea1b4"
Aug 11 17:55:04 minikube kubelet[1170]: I0811 17:55:04.246279 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:55:05 minikube kubelet[1170]: I0811 17:55:05.357729 1170 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/meetings-5db87dbdff-wl5pm through plugin: invalid network status for"
Aug 11 17:55:08 minikube kubelet[1170]: E0811 17:55:08.381548 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:10 minikube kubelet[1170]: I0811 17:55:10.567805 1170 scope.go:111] "RemoveContainer" containerID="0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e"
Aug 11 17:55:10 minikube kubelet[1170]: I0811 17:55:10.607403 1170 scope.go:111] "RemoveContainer" containerID="06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16"
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.783992 1170 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-w966b" (UniqueName: "kubernetes.io/projected/2ad4851c-8380-4a9f-937c-233630dc97d7-kube-api-access-w966b") pod "2ad4851c-8380-4a9f-937c-233630dc97d7" (UID: "2ad4851c-8380-4a9f-937c-233630dc97d7") "
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.784067 1170 reconciler.go:196] "operationExecutor.UnmountVolume started for volume "kube-api-access-pt74p" (UniqueName: "kubernetes.io/projected/37b7f155-4fd6-443b-a319-e707ebe9f2c1-kube-api-access-pt74p") pod "37b7f155-4fd6-443b-a319-e707ebe9f2c1" (UID: "37b7f155-4fd6-443b-a319-e707ebe9f2c1") "
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.786645 1170 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b7f155-4fd6-443b-a319-e707ebe9f2c1-kube-api-access-pt74p" (OuterVolumeSpecName: "kube-api-access-pt74p") pod "37b7f155-4fd6-443b-a319-e707ebe9f2c1" (UID: "37b7f155-4fd6-443b-a319-e707ebe9f2c1"). InnerVolumeSpecName "kube-api-access-pt74p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.786827 1170 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad4851c-8380-4a9f-937c-233630dc97d7-kube-api-access-w966b" (OuterVolumeSpecName: "kube-api-access-w966b") pod "2ad4851c-8380-4a9f-937c-233630dc97d7" (UID: "2ad4851c-8380-4a9f-937c-233630dc97d7"). InnerVolumeSpecName "kube-api-access-w966b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.884673 1170 reconciler.go:319] "Volume detached for volume "kube-api-access-w966b" (UniqueName: "kubernetes.io/projected/2ad4851c-8380-4a9f-937c-233630dc97d7-kube-api-access-w966b") on node "minikube" DevicePath """
Aug 11 17:55:11 minikube kubelet[1170]: I0811 17:55:11.884733 1170 reconciler.go:319] "Volume detached for volume "kube-api-access-pt74p" (UniqueName: "kubernetes.io/projected/37b7f155-4fd6-443b-a319-e707ebe9f2c1-kube-api-access-pt74p") on node "minikube" DevicePath """
Aug 11 17:55:18 minikube kubelet[1170]: E0811 17:55:18.433073 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/37b7f155-4fd6-443b-a319-e707ebe9f2c1/etc-hosts with error exit status 1" pod="default/dev-login-d9c6b5dcb-ndz5v"
Aug 11 17:55:18 minikube kubelet[1170]: E0811 17:55:18.439660 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:27 minikube kubelet[1170]: I0811 17:55:27.334359 1170 log.go:184] http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader (httplog.go:217)
Aug 11 17:55:28 minikube kubelet[1170]: E0811 17:55:28.516062 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/681ed0d3-7255-459a-b96c-36e019b9e9f5/etc-hosts with error exit status 1" pod="default/meetings-5fd8b9d97f-fx7f5"
Aug 11 17:55:28 minikube kubelet[1170]: E0811 17:55:28.524908 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/37b7f155-4fd6-443b-a319-e707ebe9f2c1/etc-hosts with error exit status 1" pod="default/dev-login-d9c6b5dcb-ndz5v"
Aug 11 17:55:28 minikube kubelet[1170]: E0811 17:55:28.541800 1170 cadvisor_stats_provider.go:151] "Unable to fetch pod etc hosts stats" err="failed to get stats failed command 'du' ($ nice -n 19 du -x -s -B 1) on path /var/lib/kubelet/pods/2ad4851c-8380-4a9f-937c-233630dc97d7/etc-hosts with error exit status 1" pod="default/hydra-6699d9db88-dlw7p"
Aug 11 17:55:30 minikube kubelet[1170]: W0811 17:55:30.140819 1170 sysinfo.go:203] Nodes topology is not available, providing CPU topology
Aug 11 17:55:30 minikube kubelet[1170]: E0811 17:55:30.168289 1170 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/3b987718669a210c1a55f59cfb33139450941a2fe5c60f4692b5dd056683eb96/diff" to get inode usage: stat /var/lib/docker/overlay2/3b987718669a210c1a55f59cfb33139450941a2fe5c60f4692b5dd056683eb96/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e" to get inode usage: stat /var/lib/docker/containers/0b414234afca0bee6f168e4f5bc3eaa18aac96641616a6edd8f554e626b5032e: no such file or directory
Aug 11 17:55:30 minikube kubelet[1170]: E0811 17:55:30.169852 1170 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/a892d6190a17c53d3933f049fe9a26b2f26558cac808021451cc01b21cb3cb21/diff" to get inode usage: stat /var/lib/docker/overlay2/a892d6190a17c53d3933f049fe9a26b2f26558cac808021451cc01b21cb3cb21/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16" to get inode usage: stat /var/lib/docker/containers/06e05b27eb6da432c0dd86ac13c1167c27a5cf37221fb0cc4c55530c18566b16: no such file or directory
Aug 11 18:00:29 minikube kubelet[1170]: W0811 18:00:29.795827 1170 sysinfo.go:203] Nodes topology is not available, providing CPU topology
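The kubelet log shows the meetings container stuck in CrashLoopBackOff with an escalating back-off, but the log itself does not say why the container exits. A sketch of how one might dig further, using the pod name taken from the entries above (illustrative, not from the original report):

# Fetch the output of the previous, crashed run of the container.
kubectl logs meetings-5fd8b9d97f-fx7f5 -c meetings --previous
# Check the last state, exit code, and events for the pod.
kubectl describe pod meetings-5fd8b9d97f-fx7f5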
==> storage-provisioner [01a3799e1aff] <==
I0810 21:42:43.758848 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0810 21:43:13.739450 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [378df0f3ce20] <==
I0810 21:43:29.716566 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0810 21:43:29.736669 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0810 21:43:29.737324 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0810 21:43:47.158326 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0810 21:43:47.158611 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"628ecb3e-d393-433d-bdfc-4fdae717f1ec", APIVersion:"v1", ResourceVersion:"61529", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_342a4e4f-8fdc-4345-a8f7-aed1c8f8fa0a became leader
I0810 21:43:47.158792 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_342a4e4f-8fdc-4345-a8f7-aed1c8f8fa0a!
I0810 21:43:47.260888 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_342a4e4f-8fdc-4345-a8f7-aed1c8f8fa0a!
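The first storage-provisioner instance died with an i/o timeout reaching 10.96.0.1:443, the in-cluster apiserver service, while the restarted instance came up cleanly, which points at a transient networking gap during node startup rather than a persistent fault. If it recurred, a minimal connectivity check from inside the node might look like this (assumes curl is present in the minikube node image):

# Probe the in-cluster apiserver service IP from the minikube node.
minikube ssh -- curl -k --max-time 5 https://10.96.0.1:443/version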