*
* ==> Audit <==
* |------------|----------------------------|----------|---------------|---------|---------------------|---------------------|
|  Command   |            Args            | Profile  |     User      | Version |     Start Time      |      End Time       |
|------------|----------------------------|----------|---------------|---------|---------------------|---------------------|
| start      |                            | minikube | ashleylionell | v1.28.0 | 12 Jan 23 04:45 IST |                     |
| start      |                            | minikube | ashleylionell | v1.28.0 | 12 Jan 23 04:54 IST |                     |
| start      |                            | minikube | ashleylionell | v1.28.0 | 12 Jan 23 05:10 IST | 12 Jan 23 05:11 IST |
| service    | test-nginx-svc --url       | minikube | ashleylionell | v1.28.0 | 12 Jan 23 18:39 IST | 12 Jan 23 18:47 IST |
| service    | service/grid-hub-svc --url | minikube | ashleylionell | v1.28.0 | 12 Jan 23 19:25 IST |                     |
| service    | grid-hub-svc --url         | minikube | ashleylionell | v1.28.0 | 12 Jan 23 19:25 IST | 12 Jan 23 19:28 IST |
| service    | grid-hub-svc --url         | minikube | ashleylionell | v1.28.0 | 12 Jan 23 19:28 IST | 12 Jan 23 19:28 IST |
| service    | grid-hub-svc --url         | minikube | ashleylionell | v1.28.0 | 12 Jan 23 19:29 IST | 12 Jan 23 23:38 IST |
| service    | grid-hub-svc --url         | minikube | ashleylionell | v1.28.0 | 12 Jan 23 23:38 IST | 13 Jan 23 06:27 IST |
| addons     | list                       | minikube | ashleylionell | v1.28.0 | 13 Jan 23 00:33 IST | 13 Jan 23 00:33 IST |
| addons     | enable ingress             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 00:41 IST | 13 Jan 23 00:41 IST |
| addons     | enable ingress-dns         | minikube | ashleylionell | v1.28.0 | 13 Jan 23 00:42 IST | 13 Jan 23 00:42 IST |
| addons     | list                       | minikube | ashleylionell | v1.28.0 | 13 Jan 23 00:42 IST | 13 Jan 23 00:42 IST |
| tunnel     |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 00:46 IST | 13 Jan 23 00:46 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 00:48 IST | 13 Jan 23 00:48 IST |
| delete     |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:22 IST | 13 Jan 23 06:22 IST |
| start      | driver=docker              | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:23 IST | 13 Jan 23 06:23 IST |
| addons     | list                       | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:25 IST | 13 Jan 23 06:25 IST |
| addons     | enable ingress             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:25 IST | 13 Jan 23 06:26 IST |
| addons     | enable ingress-dns         | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:26 IST | 13 Jan 23 06:26 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:26 IST | 13 Jan 23 06:26 IST |
| delete     |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:38 IST | 13 Jan 23 06:38 IST |
| start      | --driver=hyperkit          | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:39 IST | 13 Jan 23 06:40 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:41 IST | 13 Jan 23 06:41 IST |
| service    | service/grid-svc --url     | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:42 IST |                     |
| service    | grid-svc --url             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:42 IST | 13 Jan 23 06:42 IST |
| addons     | enable ingress             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:44 IST | 13 Jan 23 06:45 IST |
| addons     | enable ingress-dns         | minikube | ashleylionell | v1.28.0 | 13 Jan 23 06:45 IST | 13 Jan 23 06:45 IST |
| service    | grid-svc --url             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 20:19 IST | 13 Jan 23 20:19 IST |
| service    | node-svc --url             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 21:53 IST | 13 Jan 23 21:53 IST |
| service    | node-svc --url             | minikube | ashleylionell | v1.28.0 | 13 Jan 23 22:04 IST | 13 Jan 23 22:04 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 13 Jan 23 23:54 IST | 13 Jan 23 23:54 IST |
| docker-env |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:27 IST | 14 Jan 23 05:27 IST |
| addons     | enable registry            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:32 IST | 14 Jan 23 05:32 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:32 IST | 14 Jan 23 05:32 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:35 IST | 14 Jan 23 05:35 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:36 IST | 14 Jan 23 05:36 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:40 IST | 14 Jan 23 05:40 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:40 IST | 14 Jan 23 05:40 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:44 IST | 14 Jan 23 05:44 IST |
| ip         |                            | minikube | ashleylionell | v1.28.0 | 14 Jan 23 05:44 IST | 14 Jan 23 05:44 IST |
|------------|----------------------------|----------|---------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/13 06:39:16
Running on machine: alionell
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0113 06:39:16.643339 81479 out.go:296] Setting OutFile to fd 1 ...
I0113 06:39:16.643894 81479 out.go:348] isatty.IsTerminal(1) = true
I0113 06:39:16.643898 81479 out.go:309] Setting ErrFile to fd 2...
I0113 06:39:16.643904 81479 out.go:348] isatty.IsTerminal(2) = true
I0113 06:39:16.644378 81479 root.go:334] Updating PATH: /Users/ashleylionell/.minikube/bin
I0113 06:39:16.648341 81479 out.go:303] Setting JSON to false
I0113 06:39:16.680042 81479 start.go:116] hostinfo: {"hostname":"alionell","uptime":91976,"bootTime":1673480180,"procs":628,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.3.1","kernelVersion":"20.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"4ebfa0e7-de0b-59dc-ba4e-3427950c40c3"}
W0113 06:39:16.680175 81479 start.go:124] gopshost.Virtualization returned error: not implemented yet
I0113 06:39:16.700743 81479 out.go:177] 😄 minikube v1.28.0 on Darwin 11.3.1
I0113 06:39:16.719983 81479 notify.go:220] Checking for updates...
I0113 06:39:16.720654 81479 driver.go:365] Setting default libvirt URI to qemu:///system
I0113 06:39:16.758905 81479 out.go:177] ✨ Using the hyperkit driver based on user configuration
I0113 06:39:16.767455 81479 start.go:282] selected driver: hyperkit
I0113 06:39:16.767847 81479 start.go:808] validating driver "hyperkit" against
I0113 06:39:16.767888 81479 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0113 06:39:16.768006 81479 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0113 06:39:16.770001 81479 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/ashleylionell/.minikube/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
W0113 06:39:16.770137 81479 install.go:62] docker-machine-driver-hyperkit: exec: "docker-machine-driver-hyperkit": executable file not found in $PATH
I0113 06:39:16.786223 81479 out.go:177] 💾 Downloading driver docker-machine-driver-hyperkit:
I0113 06:39:16.816481 81479 download.go:101] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64.sha256 -> /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit
I0113 06:39:17.394969 81479 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x103482e20 0x103482e20 0x103482e20 0x103482e20 0x103482e20 0x103482e20 0x103482e20] Decompressors:map[bz2:0x103482e20 gz:0x103482e20 tar:0x103482e20 tar.bz2:0x103482e20 tar.gz:0x103482e20 tar.xz:0x103482e20 tar.zst:0x103482e20 tbz2:0x103482e20 tgz:0x103482e20 txz:0x103482e20 tzst:0x103482e20 xz:0x103482e20 zip:0x103482e20 zst:0x103482e20] Getters:map[file:0xc0004f8100 http:0xc000f8a050 https:0xc000f8a0a0] Dir:false ProgressListener:0x10343e880 Insecure:false DisableSymlinks:false Options:[0x101677f40]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0113 06:39:17.395142 81479 download.go:101] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.28.0/docker-machine-driver-hyperkit.sha256 -> /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit
I0113 06:39:22.913346 81479 install.go:79] stdout:
I0113 06:39:22.934250 81479 out.go:177] 🔑 The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit
I0113 06:39:22.953735 81479 install.go:99] testing: [sudo -n chown root:wheel /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit]
I0113 06:39:23.033548 81479 install.go:101] [sudo chown root:wheel /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit] may require a password: exit status 1
I0113 06:39:23.033583 81479 install.go:106] running: [sudo chown root:wheel /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit]
I0113 06:39:35.658219 81479 install.go:99] testing: [sudo -n chmod u+s /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit]
I0113 06:39:35.687067 81479 install.go:106] running: [sudo chmod u+s /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit]
I0113 06:39:35.713980 81479 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I0113 06:39:35.714653 81479 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
I0113 06:39:35.714758 81479 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
I0113 06:39:35.714778 81479 cni.go:95] Creating CNI manager for ""
I0113 06:39:35.714786 81479 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0113 06:39:35.714792 81479 start_flags.go:317] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0113 06:39:35.715055 81479 iso.go:124] acquiring lock: {Name:mk8adbb138efacc09b5fa996fdc0a51dd2cd21eb Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0113 06:39:35.735400 81479 out.go:177] 💿 Downloading VM boot image ...
I0113 06:39:35.771542 81479 download.go:101] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso.sha256 -> /Users/ashleylionell/.minikube/cache/iso/amd64/minikube-v1.28.0-amd64.iso
I0113 06:39:55.874506 81479 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0113 06:39:55.930555 81479 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0113 06:39:55.930663 81479 preload.go:148] Found local preload: /Users/ashleylionell/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0113 06:39:55.933270 81479 cache.go:57] Caching tarball of preloaded images
I0113 06:39:55.934370 81479 preload.go:174] Found /Users/ashleylionell/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0113 06:39:55.934395 81479 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0113 06:39:55.937512 81479 profile.go:148] Saving config to /Users/ashleylionell/.minikube/profiles/minikube/config.json ...
I0113 06:39:55.937569 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/config.json: {Name:mkf91d47410ef314a232f151dcf2ef7dbb32622b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0113 06:39:55.939014 81479 cache.go:208] Successfully downloaded all kic artifacts
I0113 06:39:55.941553 81479 start.go:364] acquiring machines lock for minikube: {Name:mk3215dd8b39896b9e11db27eef50f4e6c9f9931 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0113 06:39:55.941791 81479 start.go:368] acquired machines lock for "minikube" in 215.377µs
I0113 06:39:55.942472 81479 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0113 06:39:55.942547 81479 start.go:125] createHost starting for "" (driver="hyperkit")
I0113 06:39:55.961731 81479 out.go:204] 🔥 Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0113 06:39:55.965856 81479 main.go:134] libmachine: Found binary path at /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit
I0113 06:39:55.968478 81479 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0113 06:39:56.601108 81479 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61140
I0113 06:39:56.603569 81479 main.go:134] libmachine: () Calling .GetVersion
I0113 06:39:56.604975 81479 main.go:134] libmachine: Using API Version 1
I0113 06:39:56.604988 81479 main.go:134] libmachine: () Calling .SetConfigRaw
I0113 06:39:56.605271 81479 main.go:134] libmachine: () Calling .GetMachineName
I0113 06:39:56.605369 81479 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0113 06:39:56.605469 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:39:56.605883 81479 start.go:159] libmachine.API.Create for "minikube" (driver="hyperkit")
I0113 06:39:56.605908 81479 client.go:168] LocalClient.Create starting
I0113 06:39:56.606260 81479 main.go:134] libmachine: Reading certificate data from /Users/ashleylionell/.minikube/certs/ca.pem
I0113 06:39:56.606823 81479 main.go:134] libmachine: Decoding PEM data...
I0113 06:39:56.607197 81479 main.go:134] libmachine: Parsing certificate...
I0113 06:39:56.607878 81479 main.go:134] libmachine: Reading certificate data from /Users/ashleylionell/.minikube/certs/cert.pem
I0113 06:39:56.608141 81479 main.go:134] libmachine: Decoding PEM data...
I0113 06:39:56.608149 81479 main.go:134] libmachine: Parsing certificate...
I0113 06:39:56.608169 81479 main.go:134] libmachine: Running pre-create checks...
I0113 06:39:56.608175 81479 main.go:134] libmachine: (minikube) Calling .PreCreateCheck
I0113 06:39:56.608272 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:39:56.608761 81479 main.go:134] libmachine: (minikube) Calling .GetConfigRaw
I0113 06:39:56.609540 81479 main.go:134] libmachine: Creating machine...
I0113 06:39:56.609549 81479 main.go:134] libmachine: (minikube) Calling .Create
I0113 06:39:56.609615 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:39:56.609730 81479 main.go:134] libmachine: (minikube) DBG | I0113 06:39:56.609607 81545 common.go:116] Making disk image using store path: /Users/ashleylionell/.minikube
I0113 06:39:56.609833 81479 main.go:134] libmachine: (minikube) Downloading /Users/ashleylionell/.minikube/cache/boot2docker.iso from file:///Users/ashleylionell/.minikube/cache/iso/amd64/minikube-v1.28.0-amd64.iso...
I0113 06:39:56.975070 81479 main.go:134] libmachine: (minikube) DBG | I0113 06:39:56.974986 81545 common.go:123] Creating ssh key: /Users/ashleylionell/.minikube/machines/minikube/id_rsa...
I0113 06:39:57.032947 81479 main.go:134] libmachine: (minikube) DBG | I0113 06:39:57.032876 81545 common.go:129] Creating raw disk image: /Users/ashleylionell/.minikube/machines/minikube/minikube.rawdisk...
I0113 06:39:57.032960 81479 main.go:134] libmachine: (minikube) DBG | Writing magic tar header
I0113 06:39:57.032969 81479 main.go:134] libmachine: (minikube) DBG | Writing SSH key tar header
I0113 06:39:57.033243 81479 main.go:134] libmachine: (minikube) DBG | I0113 06:39:57.033211 81545 common.go:143] Fixing permissions on /Users/ashleylionell/.minikube/machines/minikube ...
I0113 06:39:57.380748 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:39:57.380794 81479 main.go:134] libmachine: (minikube) DBG | clean start, hyperkit pid file doesn't exist: /Users/ashleylionell/.minikube/machines/minikube/hyperkit.pid
I0113 06:39:57.380859 81479 main.go:134] libmachine: (minikube) DBG | Using UUID fdf7b5f8-92de-11ed-9b0a-acde48001122
I0113 06:39:57.937801 81479 main.go:134] libmachine: (minikube) DBG | Generated MAC b6:b1:b4:e0:55:e3
I0113 06:39:57.937827 81479 main.go:134] libmachine: (minikube) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
I0113 06:39:57.937869 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/ashleylionell/.minikube/machines/minikube", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fdf7b5f8-92de-11ed-9b0a-acde48001122", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002022d0)}, ISOImages:[]string{"/Users/ashleylionell/.minikube/machines/minikube/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/ashleylionell/.minikube/machines/minikube/bzimage", Initrd:"/Users/ashleylionell/.minikube/machines/minikube/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0113 06:39:57.937900 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/ashleylionell/.minikube/machines/minikube", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fdf7b5f8-92de-11ed-9b0a-acde48001122", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002022d0)}, ISOImages:[]string{"/Users/ashleylionell/.minikube/machines/minikube/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/ashleylionell/.minikube/machines/minikube/bzimage", Initrd:"/Users/ashleylionell/.minikube/machines/minikube/initrd", Bootrom:"", CPUs:2, Memory:4000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0113 06:39:57.938030 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/ashleylionell/.minikube/machines/minikube/hyperkit.pid", "-c", "2", "-m", "4000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fdf7b5f8-92de-11ed-9b0a-acde48001122", "-s", "2:0,virtio-blk,/Users/ashleylionell/.minikube/machines/minikube/minikube.rawdisk", "-s", "3,ahci-cd,/Users/ashleylionell/.minikube/machines/minikube/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/ashleylionell/.minikube/machines/minikube/tty,log=/Users/ashleylionell/.minikube/machines/minikube/console-ring", "-f", "kexec,/Users/ashleylionell/.minikube/machines/minikube/bzimage,/Users/ashleylionell/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube"}
I0113 06:39:57.938074 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/ashleylionell/.minikube/machines/minikube/hyperkit.pid -c 2 -m 4000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fdf7b5f8-92de-11ed-9b0a-acde48001122 -s 2:0,virtio-blk,/Users/ashleylionell/.minikube/machines/minikube/minikube.rawdisk -s 3,ahci-cd,/Users/ashleylionell/.minikube/machines/minikube/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/ashleylionell/.minikube/machines/minikube/tty,log=/Users/ashleylionell/.minikube/machines/minikube/console-ring -f kexec,/Users/ashleylionell/.minikube/machines/minikube/bzimage,/Users/ashleylionell/.minikube/machines/minikube/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube"
I0113 06:39:57.938090 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0113 06:39:57.939782 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 DEBUG: hyperkit: Pid is 81552
I0113 06:39:57.940236 81479 main.go:134] libmachine: (minikube) DBG | Attempt 0
I0113 06:39:57.940255 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:39:57.940440 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:39:57.944433 81479 main.go:134] libmachine: (minikube) DBG | Searching for b6:b1:b4:e0:55:e3 in /var/db/dhcpd_leases ...
I0113 06:39:57.948362 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0113 06:39:58.010471 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: /Users/ashleylionell/.minikube/machines/minikube/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0113 06:39:58.011650 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0113 06:39:58.011669 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0113 06:39:58.011682 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0113 06:39:58.552914 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0113 06:39:58.552924 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0113 06:39:58.657495 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0113 06:39:58.657508 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0113 06:39:58.657535 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0113 06:39:58.658235 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0113 06:39:58.658243 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:39:58 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0113 06:39:59.946278 81479 main.go:134] libmachine: (minikube) DBG | Attempt 1
I0113 06:39:59.946291 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:39:59.946503 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:39:59.948322 81479 main.go:134] libmachine: (minikube) DBG | Searching for b6:b1:b4:e0:55:e3 in /var/db/dhcpd_leases ...
I0113 06:40:01.949275 81479 main.go:134] libmachine: (minikube) DBG | Attempt 2
I0113 06:40:01.949291 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:40:01.949492 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:40:01.951600 81479 main.go:134] libmachine: (minikube) DBG | Searching for b6:b1:b4:e0:55:e3 in /var/db/dhcpd_leases ...
I0113 06:40:03.352012 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:40:03 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
I0113 06:40:03.352026 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:40:03 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
I0113 06:40:03.352034 81479 main.go:134] libmachine: (minikube) DBG | 2023/01/13 06:40:03 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
I0113 06:40:03.952598 81479 main.go:134] libmachine: (minikube) DBG | Attempt 3
I0113 06:40:03.952613 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:40:03.952711 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:40:03.954580 81479 main.go:134] libmachine: (minikube) DBG | Searching for b6:b1:b4:e0:55:e3 in /var/db/dhcpd_leases ...
I0113 06:40:05.958663 81479 main.go:134] libmachine: (minikube) DBG | Attempt 4
I0113 06:40:05.958674 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:40:05.958777 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:40:05.961239 81479 main.go:134] libmachine: (minikube) DBG | Searching for b6:b1:b4:e0:55:e3 in /var/db/dhcpd_leases ...
I0113 06:40:07.965113 81479 main.go:134] libmachine: (minikube) DBG | Attempt 5
I0113 06:40:07.965136 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:40:07.965328 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:40:07.966865 81479 main.go:134] libmachine: (minikube) DBG | Searching for b6:b1:b4:e0:55:e3 in /var/db/dhcpd_leases ...
I0113 06:40:07.966893 81479 main.go:134] libmachine: (minikube) DBG | Found 1 entries in /var/db/dhcpd_leases!
I0113 06:40:07.966910 81479 main.go:134] libmachine: (minikube) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:b6:b1:b4:e0:55:e3 ID:1,b6:b1:b4:e0:55:e3 Lease:0x63c200ee}
I0113 06:40:07.966919 81479 main.go:134] libmachine: (minikube) DBG | Found match: b6:b1:b4:e0:55:e3
I0113 06:40:07.966929 81479 main.go:134] libmachine: (minikube) DBG | IP: 192.168.64.2
I0113 06:40:07.967128 81479 main.go:134] libmachine: (minikube) Calling .GetConfigRaw
I0113 06:40:07.968305 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:07.968537 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:07.968735 81479 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
I0113 06:40:07.968745 81479 main.go:134] libmachine: (minikube) Calling .GetState
I0113 06:40:07.968911 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0113 06:40:07.969010 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552
I0113 06:40:07.971694 81479 main.go:134] libmachine: Detecting operating system of created instance...
I0113 06:40:07.972330 81479 main.go:134] libmachine: Waiting for SSH to be available...
I0113 06:40:07.972709 81479 main.go:134] libmachine: Getting to WaitForSSH function...
I0113 06:40:07.972717 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:07.972911 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:07.973059 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:07.973215 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:07.973344 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:07.973938 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:07.976808 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:07.976816 81479 main.go:134] libmachine: About to run SSH command: exit 0
I0113 06:40:08.060898 81479 main.go:134] libmachine: SSH cmd err, output: :
I0113 06:40:08.060911 81479 main.go:134] libmachine: Detecting the provisioner...
I0113 06:40:08.060932 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.061123 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.061238 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.061357 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.061491 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.061707 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:08.061876 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:08.061883 81479 main.go:134] libmachine: About to run SSH command: cat /etc/os-release
I0113 06:40:08.141157 81479 main.go:134] libmachine: SSH cmd err, output: : NAME=Buildroot
VERSION=2021.02.12-1-gb347f1c-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0113 06:40:08.142348 81479 main.go:134] libmachine: found compatible host: buildroot
I0113 06:40:08.142354 81479 main.go:134] libmachine: Provisioning with buildroot...
I0113 06:40:08.142360 81479 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0113 06:40:08.143836 81479 buildroot.go:166] provisioning hostname "minikube"
I0113 06:40:08.143849 81479 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0113 06:40:08.143997 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.144098 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.144184 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.144264 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.144359 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.144495 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:08.144626 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:08.144632 81479 main.go:134] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0113 06:40:08.229254 81479 main.go:134] libmachine: SSH cmd err, output: : minikube
I0113 06:40:08.230020 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.230187 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.230281 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.230379 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.230457 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.230601 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:08.230739 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:08.230749 81479 main.go:134] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi
I0113 06:40:08.307649 81479 main.go:134] libmachine: SSH cmd err, output: :
I0113 06:40:08.308083 81479 buildroot.go:172] set auth options {CertDir:/Users/ashleylionell/.minikube CaCertPath:/Users/ashleylionell/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/ashleylionell/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/ashleylionell/.minikube/machines/server.pem ServerKeyPath:/Users/ashleylionell/.minikube/machines/server-key.pem ClientKeyPath:/Users/ashleylionell/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/ashleylionell/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/ashleylionell/.minikube}
I0113 06:40:08.308099 81479 buildroot.go:174] setting up certificates
I0113 06:40:08.308800 81479 provision.go:83] configureAuth start
I0113 06:40:08.308819 81479 main.go:134] libmachine: (minikube) Calling .GetMachineName
I0113 06:40:08.308994 81479 main.go:134] libmachine: (minikube) Calling .GetIP
I0113 06:40:08.309091 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.309200 81479 provision.go:138] copyHostCerts
I0113 06:40:08.311200 81479 exec_runner.go:144] found /Users/ashleylionell/.minikube/cert.pem, removing ...
I0113 06:40:08.311604 81479 exec_runner.go:207] rm: /Users/ashleylionell/.minikube/cert.pem
I0113 06:40:08.311877 81479 exec_runner.go:151] cp: /Users/ashleylionell/.minikube/certs/cert.pem --> /Users/ashleylionell/.minikube/cert.pem (1139 bytes)
I0113 06:40:08.312583 81479 exec_runner.go:144] found /Users/ashleylionell/.minikube/key.pem, removing ...
I0113 06:40:08.312594 81479 exec_runner.go:207] rm: /Users/ashleylionell/.minikube/key.pem
I0113 06:40:08.312740 81479 exec_runner.go:151] cp: /Users/ashleylionell/.minikube/certs/key.pem --> /Users/ashleylionell/.minikube/key.pem (1675 bytes)
I0113 06:40:08.313360 81479 exec_runner.go:144] found /Users/ashleylionell/.minikube/ca.pem, removing ...
I0113 06:40:08.313364 81479 exec_runner.go:207] rm: /Users/ashleylionell/.minikube/ca.pem
I0113 06:40:08.313461 81479 exec_runner.go:151] cp: /Users/ashleylionell/.minikube/certs/ca.pem --> /Users/ashleylionell/.minikube/ca.pem (1094 bytes)
I0113 06:40:08.313778 81479 provision.go:112] generating server cert: /Users/ashleylionell/.minikube/machines/server.pem ca-key=/Users/ashleylionell/.minikube/certs/ca.pem private-key=/Users/ashleylionell/.minikube/certs/ca-key.pem org=ashleylionell.minikube san=[192.168.64.2 192.168.64.2 localhost 127.0.0.1 minikube minikube]
I0113 06:40:08.375722 81479 provision.go:172] copyRemoteCerts
I0113 06:40:08.376167 81479 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0113 06:40:08.376189 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.376376 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.376531 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.376674 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.376789 81479 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/ashleylionell/.minikube/machines/minikube/id_rsa Username:docker}
I0113 06:40:08.419832 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1094 bytes)
I0113 06:40:08.438135 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0113 06:40:08.455336 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0113 06:40:08.472547 81479 provision.go:86] duration metric: configureAuth took 163.710159ms
I0113 06:40:08.472608 81479 buildroot.go:189] setting minikube options for container-runtime
I0113 06:40:08.474526 81479 config.go:180] Loaded profile config "minikube": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0113 06:40:08.474551 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:08.475027 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.475326 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.475581 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.475891 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.476126 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.476516 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:08.476716 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:08.476722 81479 main.go:134] libmachine: About to run SSH command: df --output=fstype / | tail -n 1
I0113 06:40:08.553019 81479 main.go:134] libmachine: SSH cmd err, output: : tmpfs
I0113 06:40:08.553026 81479 buildroot.go:70] root file system type: tmpfs
I0113 06:40:08.553154 81479 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0113 06:40:08.553166 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.553323 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.553456 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.553578 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.553678 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.553840 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:08.553985 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:08.554038 81479 main.go:134] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0113 06:40:08.639436 81479 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0113 06:40:08.640271 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:08.640436 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:08.640545 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.640643 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:08.640730 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:08.640894 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:08.641036 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:08.641048 81479 main.go:134] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0113 06:40:09.143119 81479 main.go:134] libmachine: SSH cmd err, output: : diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0113 06:40:09.143135 81479 main.go:134] libmachine: Checking connection to Docker...
I0113 06:40:09.143140 81479 main.go:134] libmachine: (minikube) Calling .GetURL
I0113 06:40:09.143311 81479 main.go:134] libmachine: Docker is up and running!
I0113 06:40:09.143322 81479 main.go:134] libmachine: Reticulating splines...
I0113 06:40:09.143333 81479 client.go:171] LocalClient.Create took 12.538150891s
I0113 06:40:09.143343 81479 start.go:167] duration metric: libmachine.API.Create for "minikube" took 12.538198489s
I0113 06:40:09.143485 81479 start.go:300] post-start starting for "minikube" (driver="hyperkit")
I0113 06:40:09.143880 81479 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0113 06:40:09.143898 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:09.144098 81479 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0113 06:40:09.144111 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:09.144234 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:09.144344 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:09.144440 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:09.144538 81479 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/ashleylionell/.minikube/machines/minikube/id_rsa Username:docker}
I0113 06:40:09.193222 81479 ssh_runner.go:195] Run: cat /etc/os-release
I0113 06:40:09.197602 81479 info.go:137] Remote host: Buildroot 2021.02.12
I0113 06:40:09.197967 81479 filesync.go:126] Scanning /Users/ashleylionell/.minikube/addons for local assets ...
I0113 06:40:09.198125 81479 filesync.go:126] Scanning /Users/ashleylionell/.minikube/files for local assets ...
I0113 06:40:09.198188 81479 start.go:303] post-start completed in 54.69972ms
I0113 06:40:09.198210 81479 main.go:134] libmachine: (minikube) Calling .GetConfigRaw
I0113 06:40:09.198875 81479 main.go:134] libmachine: (minikube) Calling .GetIP
I0113 06:40:09.199072 81479 profile.go:148] Saving config to /Users/ashleylionell/.minikube/profiles/minikube/config.json ...
I0113 06:40:09.199427 81479 start.go:128] duration metric: createHost completed in 13.257673029s
I0113 06:40:09.199445 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:09.199559 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:09.199671 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:09.199766 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:09.199876 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:09.200030 81479 main.go:134] libmachine: Using SSH client type: native
I0113 06:40:09.200173 81479 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1003ed6e0] 0x1003f0860 [] 0s} 192.168.64.2 22 }
I0113 06:40:09.200179 81479 main.go:134] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING)
I0113 06:40:09.275867 81479 main.go:134] libmachine: SSH cmd err, output: : 1673572209.446242960
I0113 06:40:09.275885 81479 fix.go:207] guest clock: 1673572209.446242960
I0113 06:40:09.275905 81479 fix.go:220] Guest: 2023-01-13 06:40:09.44624296 +0530 IST Remote: 2023-01-13 06:40:09.199435 +0530 IST m=+52.807544463 (delta=246.80796ms)
I0113 06:40:09.281211 81479 fix.go:191] guest clock delta is within tolerance: 246.80796ms
I0113 06:40:09.281219 81479 start.go:83] releasing machines lock for "minikube", held for 13.34022164s
I0113 06:40:09.283559 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:09.283782 81479 main.go:134] libmachine: (minikube) Calling .GetIP
I0113 06:40:09.283936 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:09.284859 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:09.285384 81479 main.go:134] libmachine: (minikube) Calling .DriverName
I0113 06:40:09.285934 81479 ssh_runner.go:195] Run: systemctl --version
I0113 06:40:09.285945 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:09.286056 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:09.286155 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:09.286243 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:09.286339 81479 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/ashleylionell/.minikube/machines/minikube/id_rsa Username:docker}
I0113 06:40:09.286842 81479 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0113 06:40:09.287441 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname
I0113 06:40:09.287548 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort
I0113 06:40:09.287634 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath
I0113 06:40:09.287718 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername
I0113 06:40:09.287815 81479 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/ashleylionell/.minikube/machines/minikube/id_rsa Username:docker}
I0113 06:40:09.328870 81479 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0113 06:40:09.329266 81479 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0113 06:40:09.383993 81479 docker.go:613] Got preloaded images:
I0113 06:40:09.384007 81479 docker.go:619] registry.k8s.io/kube-apiserver:v1.25.3 wasn't preloaded
I0113 06:40:09.384100 81479 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0113 06:40:09.393951 81479 ssh_runner.go:195] Run: which lz4
I0113 06:40:09.397474 81479 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0113 06:40:09.400972 81479 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0113 06:40:09.401004 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404166592 bytes)
I0113 06:40:11.190891 81479 docker.go:577] Took 1.792992 seconds to copy over tarball
I0113 06:40:11.191087 81479 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0113 06:40:16.533832 81479 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (5.342815448s)
I0113 06:40:16.533852 81479 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0113 06:40:16.574308 81479 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0113 06:40:16.582279 81479 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0113 06:40:16.596263 81479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0113 06:40:16.685609 81479 ssh_runner.go:195] Run: sudo systemctl restart docker
I0113 06:40:18.471630 81479 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.786024782s)
I0113 06:40:18.472159 81479 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0113 06:40:18.487193 81479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0113 06:40:18.501496 81479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0113 06:40:18.512664 81479 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0113 06:40:18.538922 81479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0113 06:40:18.549997 81479 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0113 06:40:18.566611 81479 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0113 06:40:18.667177 81479 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0113 06:40:18.802697 81479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0113 06:40:18.931258 81479 ssh_runner.go:195] Run: sudo systemctl restart docker
I0113 06:40:20.187749 81479 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.25648453s)
I0113 06:40:20.187852 81479 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0113 06:40:20.279175 81479 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0113 06:40:20.375393 81479 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0113 06:40:20.386978 81479 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0113 06:40:20.387062 81479 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0113 06:40:20.390960 81479 start.go:472] Will wait 60s for crictl version
I0113 06:40:20.391022 81479 ssh_runner.go:195] Run: sudo crictl version
I0113 06:40:20.418341 81479 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I0113 06:40:20.418407 81479 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0113 06:40:20.439605 81479 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0113 06:40:20.483666 81479 out.go:204] 🐳 Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I0113 06:40:20.484255 81479 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0113 06:40:20.487831 81479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0113 06:40:20.503701 81479 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0113 06:40:20.503831 81479 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0113 06:40:20.526350 81479 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0113 06:40:20.526361 81479 docker.go:543] Images already preloaded, skipping extraction
I0113 06:40:20.526881 81479 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0113 06:40:20.549803 81479 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0113 06:40:20.550441 81479 cache_images.go:84] Images are preloaded, skipping loading
I0113 06:40:20.551153 81479 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0113 06:40:20.576105 81479 cni.go:95] Creating CNI manager for ""
I0113 06:40:20.576117 81479 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0113 06:40:20.577243 81479 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0113 06:40:20.577270 81479 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0113 06:40:20.577713 81479 kubeadm.go:161] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.64.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.64.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.64.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0113 06:40:20.578720 81479 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=minikube --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.2 --runtime-request-timeout=15m
[Install]
 config: {KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0113 06:40:20.578809 81479 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0113 06:40:20.587303 81479 binaries.go:44] Found k8s binaries, skipping transfer
I0113 06:40:20.587424 81479 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0113 06:40:20.595038 81479 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (470 bytes)
I0113 06:40:20.607769 81479 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0113 06:40:20.620017 81479 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0113 06:40:20.632769 81479 ssh_runner.go:195] Run: grep 192.168.64.2 control-plane.minikube.internal$ /etc/hosts
I0113 06:40:20.635295 81479 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0113 06:40:20.646462 81479 certs.go:54] Setting up /Users/ashleylionell/.minikube/profiles/minikube for IP: 192.168.64.2
I0113 06:40:20.646934 81479 certs.go:182] skipping minikubeCA CA generation: /Users/ashleylionell/.minikube/ca.key
I0113 06:40:20.647402 81479 certs.go:182] skipping proxyClientCA CA generation: /Users/ashleylionell/.minikube/proxy-client-ca.key
I0113 06:40:20.647771 81479 certs.go:302] generating minikube-user signed cert: /Users/ashleylionell/.minikube/profiles/minikube/client.key
I0113 06:40:20.649474 81479 crypto.go:68] Generating cert /Users/ashleylionell/.minikube/profiles/minikube/client.crt with IP's: []
I0113 06:40:20.756736 81479 crypto.go:156] Writing cert to /Users/ashleylionell/.minikube/profiles/minikube/client.crt ...
I0113 06:40:20.756748 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/client.crt: {Name:mkaf0176a17f94fbd97298be88f66423f814123b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0113 06:40:20.757010 81479 crypto.go:164] Writing key to /Users/ashleylionell/.minikube/profiles/minikube/client.key ...
I0113 06:40:20.757016 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/client.key: {Name:mkb196e7d0a7bdc37e9148bea1ae79c00174fe6d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0113 06:40:20.757192 81479 certs.go:302] generating minikube signed cert: /Users/ashleylionell/.minikube/profiles/minikube/apiserver.key.a30f3483
I0113 06:40:20.757218 81479 crypto.go:68] Generating cert /Users/ashleylionell/.minikube/profiles/minikube/apiserver.crt.a30f3483 with IP's: [192.168.64.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0113 06:40:21.021507 81479 crypto.go:156] Writing cert to /Users/ashleylionell/.minikube/profiles/minikube/apiserver.crt.a30f3483 ...
I0113 06:40:21.021518 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/apiserver.crt.a30f3483: {Name:mk783bac40b0a0bf8b4f3be9c6e16375811d874a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0113 06:40:21.021769 81479 crypto.go:164] Writing key to /Users/ashleylionell/.minikube/profiles/minikube/apiserver.key.a30f3483 ...
I0113 06:40:21.021774 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/apiserver.key.a30f3483: {Name:mkbd6ab674c4b2c24b9a842f1c417fd092a96ef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0113 06:40:21.022038 81479 certs.go:320] copying /Users/ashleylionell/.minikube/profiles/minikube/apiserver.crt.a30f3483 -> /Users/ashleylionell/.minikube/profiles/minikube/apiserver.crt
I0113 06:40:21.022661 81479 certs.go:324] copying /Users/ashleylionell/.minikube/profiles/minikube/apiserver.key.a30f3483 -> /Users/ashleylionell/.minikube/profiles/minikube/apiserver.key
I0113 06:40:21.022815 81479 certs.go:302] generating aggregator signed cert: /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.key
I0113 06:40:21.022832 81479 crypto.go:68] Generating cert /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0113 06:40:21.111169 81479 crypto.go:156] Writing cert to /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.crt ...
I0113 06:40:21.111178 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.crt: {Name:mk52ba78c71e924b424465f3f2f1c97851cdbd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0113 06:40:21.111399 81479 crypto.go:164] Writing key to /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.key ...
I0113 06:40:21.111404 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.key: {Name:mk05cb63753765507fa4dda2219de472e07ee1d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0113 06:40:21.111857 81479 certs.go:388] found cert: /Users/ashleylionell/.minikube/certs/Users/ashleylionell/.minikube/certs/ca-key.pem (1679 bytes) I0113 06:40:21.112139 81479 certs.go:388] found cert: /Users/ashleylionell/.minikube/certs/Users/ashleylionell/.minikube/certs/ca.pem (1094 bytes) I0113 06:40:21.112444 81479 certs.go:388] found cert: /Users/ashleylionell/.minikube/certs/Users/ashleylionell/.minikube/certs/cert.pem (1139 bytes) I0113 06:40:21.112722 81479 certs.go:388] found cert: /Users/ashleylionell/.minikube/certs/Users/ashleylionell/.minikube/certs/key.pem (1675 bytes) I0113 06:40:21.119056 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0113 06:40:21.137659 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0113 06:40:21.158287 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0113 06:40:21.179730 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0113 06:40:21.203630 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0113 06:40:21.227607 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0113 06:40:21.246803 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0113 06:40:21.266646 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0113 06:40:21.287526 81479 ssh_runner.go:362] scp /Users/ashleylionell/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0113 06:40:21.306504 81479 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0113 06:40:21.319332 81479 ssh_runner.go:195] Run: openssl version I0113 06:40:21.323232 81479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0113 06:40:21.330447 81479 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0113 06:40:21.333745 81479 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 11 23:40 /usr/share/ca-certificates/minikubeCA.pem I0113 06:40:21.333879 81479 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0113 06:40:21.337749 81479 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0113 06:40:21.345079 81479 kubeadm.go:396] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.28.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: 
HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} I0113 06:40:21.345225 81479 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0113 06:40:21.362541 81479 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0113 06:40:21.370019 81479 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0113 06:40:21.376942 81479 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0113 06:40:21.384685 81479 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0113 06:40:21.385056 81479 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem" I0113 06:40:21.421225 81479 kubeadm.go:317] [init] Using 
Kubernetes version: v1.25.3 I0113 06:40:21.421277 81479 kubeadm.go:317] [preflight] Running pre-flight checks I0113 06:40:21.520270 81479 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster I0113 06:40:21.520367 81479 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection I0113 06:40:21.520464 81479 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' I0113 06:40:21.637937 81479 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs" I0113 06:40:21.657814 81479 out.go:204] ▪ Generating certificates and keys ... I0113 06:40:21.657938 81479 kubeadm.go:317] [certs] Using existing ca certificate authority I0113 06:40:21.658004 81479 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk I0113 06:40:21.981063 81479 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key I0113 06:40:22.209925 81479 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key I0113 06:40:22.371333 81479 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key I0113 06:40:22.465599 81479 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key I0113 06:40:22.555287 81479 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key I0113 06:40:22.555605 81479 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.64.2 127.0.0.1 ::1] I0113 06:40:23.062641 81479 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key I0113 06:40:23.062796 81479 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.64.2 127.0.0.1 ::1] I0113 06:40:23.224686 81479 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key I0113 06:40:23.311375 81479 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key I0113 06:40:23.387409 81479 kubeadm.go:317] [certs] Generating "sa" key and public key I0113 06:40:23.387515 81479 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes" I0113 06:40:23.536116 81479 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file I0113 06:40:23.846688 81479 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file I0113 06:40:23.937428 81479 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file I0113 06:40:24.100885 81479 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file I0113 06:40:24.113221 81479 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" I0113 06:40:24.113345 81479 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" I0113 06:40:24.113411 81479 kubeadm.go:317] [kubelet-start] Starting the kubelet I0113 06:40:24.210670 81479 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests" I0113 06:40:24.251107 81479 out.go:204] ▪ Booting up control plane ... 
I0113 06:40:24.251243 81479 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver" I0113 06:40:24.251366 81479 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager" I0113 06:40:24.251430 81479 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler" I0113 06:40:24.251521 81479 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0113 06:40:24.251675 81479 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s I0113 06:40:36.711576 81479 kubeadm.go:317] [apiclient] All control plane components are healthy after 12.503133 seconds I0113 06:40:36.711717 81479 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace I0113 06:40:36.725557 81479 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster I0113 06:40:38.244472 81479 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs I0113 06:40:38.244702 81479 kubeadm.go:317] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] I0113 06:40:38.754137 81479 kubeadm.go:317] [bootstrap-token] Using token: dah2c7.qa1kvq2tbk9m9uu9 I0113 06:40:38.808998 81479 out.go:204] ▪ Configuring RBAC rules ... I0113 06:40:38.809409 81479 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles I0113 06:40:38.809760 81479 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes I0113 06:40:38.812367 81479 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials I0113 06:40:38.815280 81479 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token I0113 06:40:38.818169 81479 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster I0113 06:40:38.821945 81479 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace I0113 06:40:38.841303 81479 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key I0113 06:40:39.070154 81479 kubeadm.go:317] [addons] Applied essential addon: CoreDNS I0113 06:40:39.167846 81479 kubeadm.go:317] [addons] Applied essential addon: kube-proxy I0113 06:40:39.169143 81479 kubeadm.go:317] I0113 06:40:39.169216 81479 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully! 
I0113 06:40:39.169220 81479 kubeadm.go:317] I0113 06:40:39.169301 81479 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user: I0113 06:40:39.169307 81479 kubeadm.go:317] I0113 06:40:39.169340 81479 kubeadm.go:317] mkdir -p $HOME/.kube I0113 06:40:39.169434 81479 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config I0113 06:40:39.169493 81479 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config I0113 06:40:39.169501 81479 kubeadm.go:317] I0113 06:40:39.169561 81479 kubeadm.go:317] Alternatively, if you are the root user, you can run: I0113 06:40:39.169570 81479 kubeadm.go:317] I0113 06:40:39.169615 81479 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf I0113 06:40:39.169620 81479 kubeadm.go:317] I0113 06:40:39.169684 81479 kubeadm.go:317] You should now deploy a pod network to the cluster. I0113 06:40:39.169772 81479 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: I0113 06:40:39.169847 81479 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/ I0113 06:40:39.169852 81479 kubeadm.go:317] I0113 06:40:39.170145 81479 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities I0113 06:40:39.170223 81479 kubeadm.go:317] and service account keys on each node and then running the following as root: I0113 06:40:39.170239 81479 kubeadm.go:317] I0113 06:40:39.170358 81479 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token dah2c7.qa1kvq2tbk9m9uu9 \ I0113 06:40:39.170471 81479 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:a8a50885bd29b148f780424787520824ac459c31cec31b7a6f1f768fd044a951 \ I0113 06:40:39.170528 81479 kubeadm.go:317] --control-plane I0113 06:40:39.170534 81479 kubeadm.go:317] I0113 06:40:39.170633 81479 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root: I0113 06:40:39.170643 81479 kubeadm.go:317] I0113 06:40:39.170781 81479 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token dah2c7.qa1kvq2tbk9m9uu9 \ I0113 06:40:39.170931 81479 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:a8a50885bd29b148f780424787520824ac459c31cec31b7a6f1f768fd044a951 I0113 06:40:39.172936 81479 kubeadm.go:317] W0113 01:10:21.596807 1247 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! 
I0113 06:40:39.173051 81479 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' I0113 06:40:39.173067 81479 cni.go:95] Creating CNI manager for "" I0113 06:40:39.173072 81479 cni.go:169] CNI unnecessary in this configuration, recommending no CNI I0113 06:40:39.173090 81479 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0113 06:40:39.173198 81479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=986b1ebd987211ed16f8cc10aed7d2c42fc8392f minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2023_01_13T06_40_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0113 06:40:39.173200 81479 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0113 06:40:39.435868 81479 ops.go:34] apiserver oom_adj: -16 I0113 06:40:39.435943 81479 kubeadm.go:1067] duration metric: took 262.826658ms to wait for elevateKubeSystemPrivileges. I0113 06:40:39.436025 81479 kubeadm.go:398] StartCluster complete in 18.090949985s I0113 06:40:39.436397 81479 settings.go:142] acquiring lock: {Name:mk64acab2c490cca420f3f405457561674f1afa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0113 06:40:39.436490 81479 settings.go:150] Updating kubeconfig: /Users/ashleylionell/.kube/config I0113 06:40:39.439327 81479 lock.go:35] WriteFile acquiring /Users/ashleylionell/.kube/config: {Name:mk2c2109c05dc32028ead007749f43c979eb70df Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0113 06:40:39.982466 81479 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1 I0113 06:40:39.982548 81479 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0113 06:40:40.003235 81479 out.go:177] 🔎 Verifying Kubernetes components... I0113 06:40:39.983291 81479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0113 06:40:39.983430 81479 config.go:180] Loaded profile config "minikube": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3 I0113 06:40:39.983529 81479 addons.go:486] enableAddons start: toEnable=map[], additional=[] I0113 06:40:40.042144 81479 addons.go:65] Setting default-storageclass=true in profile "minikube" I0113 06:40:40.042146 81479 addons.go:65] Setting storage-provisioner=true in profile "minikube" I0113 06:40:40.042185 81479 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0113 06:40:40.042200 81479 addons.go:227] Setting addon storage-provisioner=true in "minikube" W0113 06:40:40.042207 81479 addons.go:236] addon storage-provisioner should already be in state true I0113 06:40:40.042463 81479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0113 06:40:40.043423 81479 main.go:134] libmachine: Found binary path at /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit I0113 06:40:40.043452 81479 main.go:134] libmachine: Launching plugin server for driver hyperkit I0113 06:40:40.044902 81479 host.go:66] Checking if "minikube" exists ... 
I0113 06:40:40.046824 81479 main.go:134] libmachine: Found binary path at /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit I0113 06:40:40.046918 81479 main.go:134] libmachine: Launching plugin server for driver hyperkit I0113 06:40:40.059282 81479 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61361 I0113 06:40:40.059893 81479 main.go:134] libmachine: () Calling .GetVersion I0113 06:40:40.060438 81479 main.go:134] libmachine: Using API Version 1 I0113 06:40:40.060450 81479 main.go:134] libmachine: () Calling .SetConfigRaw I0113 06:40:40.060516 81479 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61363 I0113 06:40:40.060768 81479 main.go:134] libmachine: () Calling .GetMachineName I0113 06:40:40.060912 81479 main.go:134] libmachine: (minikube) Calling .GetState I0113 06:40:40.061057 81479 main.go:134] libmachine: () Calling .GetVersion I0113 06:40:40.061187 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0113 06:40:40.061534 81479 main.go:134] libmachine: Using API Version 1 I0113 06:40:40.061543 81479 main.go:134] libmachine: () Calling .SetConfigRaw I0113 06:40:40.061552 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552 I0113 06:40:40.061869 81479 main.go:134] libmachine: () Calling .GetMachineName I0113 06:40:40.062367 81479 main.go:134] libmachine: Found binary path at /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit I0113 06:40:40.062393 81479 main.go:134] libmachine: Launching plugin server for driver hyperkit I0113 06:40:40.073967 81479 addons.go:227] Setting addon default-storageclass=true in "minikube" W0113 06:40:40.073983 81479 addons.go:236] addon default-storageclass should already be in state true I0113 06:40:40.074003 81479 host.go:66] Checking if "minikube" exists ... I0113 06:40:40.074372 81479 main.go:134] libmachine: Found binary path at /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit I0113 06:40:40.074407 81479 main.go:134] libmachine: Launching plugin server for driver hyperkit I0113 06:40:40.075514 81479 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61365 I0113 06:40:40.076874 81479 main.go:134] libmachine: () Calling .GetVersion I0113 06:40:40.077726 81479 main.go:134] libmachine: Using API Version 1 I0113 06:40:40.077738 81479 main.go:134] libmachine: () Calling .SetConfigRaw I0113 06:40:40.078141 81479 main.go:134] libmachine: () Calling .GetMachineName I0113 06:40:40.078269 81479 main.go:134] libmachine: (minikube) Calling .GetState I0113 06:40:40.078397 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0113 06:40:40.078514 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552 I0113 06:40:40.080568 81479 main.go:134] libmachine: (minikube) Calling .DriverName I0113 06:40:40.099662 81479 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0113 06:40:40.085719 81479 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61367 I0113 06:40:40.090788 81479 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . 
\/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0113 06:40:40.100160 81479 main.go:134] libmachine: () Calling .GetVersion I0113 06:40:40.119080 81479 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml I0113 06:40:40.119098 81479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0113 06:40:40.119126 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname I0113 06:40:40.119430 81479 api_server.go:51] waiting for apiserver process to appear ... I0113 06:40:40.119617 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort I0113 06:40:40.119712 81479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0113 06:40:40.119975 81479 main.go:134] libmachine: Using API Version 1 I0113 06:40:40.119978 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath I0113 06:40:40.119989 81479 main.go:134] libmachine: () Calling .SetConfigRaw I0113 06:40:40.120237 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername I0113 06:40:40.120489 81479 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/ashleylionell/.minikube/machines/minikube/id_rsa Username:docker} I0113 06:40:40.120577 81479 main.go:134] libmachine: () Calling .GetMachineName I0113 06:40:40.121492 81479 main.go:134] libmachine: Found binary path at /Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit I0113 06:40:40.121553 81479 main.go:134] libmachine: Launching plugin server for driver hyperkit I0113 06:40:40.139132 81479 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:61370 I0113 06:40:40.139646 81479 main.go:134] libmachine: () Calling .GetVersion I0113 06:40:40.140678 81479 main.go:134] libmachine: Using API Version 1 I0113 06:40:40.140695 81479 main.go:134] libmachine: () Calling .SetConfigRaw I0113 06:40:40.141089 81479 main.go:134] libmachine: () Calling .GetMachineName I0113 06:40:40.141259 81479 main.go:134] libmachine: (minikube) Calling .GetState I0113 06:40:40.141406 81479 main.go:134] libmachine: (minikube) DBG | exe=/Users/ashleylionell/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0113 06:40:40.141546 81479 main.go:134] libmachine: (minikube) DBG | hyperkit pid from json: 81552 I0113 06:40:40.143624 81479 main.go:134] libmachine: (minikube) Calling .DriverName I0113 06:40:40.143885 81479 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml I0113 06:40:40.143892 81479 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0113 06:40:40.143903 81479 main.go:134] libmachine: (minikube) Calling .GetSSHHostname I0113 06:40:40.144019 81479 main.go:134] libmachine: (minikube) Calling .GetSSHPort I0113 06:40:40.144165 81479 main.go:134] libmachine: (minikube) Calling .GetSSHKeyPath I0113 06:40:40.144283 81479 main.go:134] libmachine: (minikube) Calling .GetSSHUsername I0113 06:40:40.144396 81479 sshutil.go:53] new ssh client: &{IP:192.168.64.2 Port:22 SSHKeyPath:/Users/ashleylionell/.minikube/machines/minikube/id_rsa Username:docker} I0113 06:40:40.234565 81479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0113 06:40:40.247261 81479 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f 
/etc/kubernetes/addons/storage-provisioner.yaml I0113 06:40:41.063298 81479 start.go:826] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS I0113 06:40:41.063323 81479 api_server.go:71] duration metric: took 1.080746189s to wait for apiserver process to appear ... I0113 06:40:41.063340 81479 api_server.go:87] waiting for apiserver healthz status ... I0113 06:40:41.063362 81479 api_server.go:252] Checking apiserver healthz at https://192.168.64.2:8443/healthz ... I0113 06:40:41.063370 81479 main.go:134] libmachine: Making call to close driver server I0113 06:40:41.063378 81479 main.go:134] libmachine: (minikube) Calling .Close I0113 06:40:41.063569 81479 main.go:134] libmachine: Successfully made call to close driver server I0113 06:40:41.063574 81479 main.go:134] libmachine: Making call to close connection to plugin binary I0113 06:40:41.063581 81479 main.go:134] libmachine: Making call to close driver server I0113 06:40:41.063585 81479 main.go:134] libmachine: (minikube) Calling .Close I0113 06:40:41.063780 81479 main.go:134] libmachine: (minikube) DBG | Closing plugin on server side I0113 06:40:41.063790 81479 main.go:134] libmachine: Successfully made call to close driver server I0113 06:40:41.063799 81479 main.go:134] libmachine: Making call to close connection to plugin binary I0113 06:40:41.063816 81479 main.go:134] libmachine: Making call to close driver server I0113 06:40:41.063820 81479 main.go:134] libmachine: (minikube) Calling .Close I0113 06:40:41.063998 81479 main.go:134] libmachine: (minikube) DBG | Closing plugin on server side I0113 06:40:41.064002 81479 main.go:134] libmachine: Successfully made call to close driver server I0113 06:40:41.064008 81479 main.go:134] libmachine: Making call to close connection to plugin binary I0113 06:40:41.073076 81479 api_server.go:278] https://192.168.64.2:8443/healthz returned 200: ok I0113 06:40:41.074379 81479 api_server.go:140] control plane version: v1.25.3 I0113 06:40:41.074387 81479 api_server.go:130] duration metric: took 11.043635ms to wait for apiserver health ... I0113 06:40:41.074397 81479 system_pods.go:43] waiting for kube-system pods to appear ... 
I0113 06:40:41.080790 81479 main.go:134] libmachine: Making call to close driver server I0113 06:40:41.080806 81479 main.go:134] libmachine: (minikube) Calling .Close I0113 06:40:41.080983 81479 main.go:134] libmachine: Successfully made call to close driver server I0113 06:40:41.080991 81479 main.go:134] libmachine: Making call to close connection to plugin binary I0113 06:40:41.080996 81479 main.go:134] libmachine: Making call to close driver server I0113 06:40:41.081001 81479 main.go:134] libmachine: (minikube) Calling .Close I0113 06:40:41.081003 81479 main.go:134] libmachine: (minikube) DBG | Closing plugin on server side I0113 06:40:41.081224 81479 main.go:134] libmachine: (minikube) DBG | Closing plugin on server side I0113 06:40:41.081235 81479 main.go:134] libmachine: Successfully made call to close driver server I0113 06:40:41.081244 81479 main.go:134] libmachine: Making call to close connection to plugin binary I0113 06:40:41.083905 81479 system_pods.go:59] 5 kube-system pods found I0113 06:40:41.127199 81479 out.go:177] 🌟 Enabled addons: default-storageclass, storage-provisioner I0113 06:40:41.127204 81479 system_pods.go:61] "etcd-minikube" [ba88fdf2-aefc-4b94-9e64-f60246bf6514] Pending I0113 06:40:41.127209 81479 system_pods.go:61] "kube-apiserver-minikube" [be92db46-477d-4c32-8931-eb8375798f29] Pending I0113 06:40:41.127213 81479 system_pods.go:61] "kube-controller-manager-minikube" [3162fcdc-683a-4876-86c2-edf597262637] Pending I0113 06:40:41.127215 81479 system_pods.go:61] "kube-scheduler-minikube" [de5e0c44-f607-4ee1-8da0-71a738615abc] Pending I0113 06:40:41.152157 81479 addons.go:488] enableAddons completed in 1.169124058s I0113 06:40:41.152172 81479 system_pods.go:61] "storage-provisioner" [84c0d349-2b78-4527-82f0-fdd962086ddd] Pending I0113 06:40:41.152197 81479 system_pods.go:74] duration metric: took 77.795316ms to wait for pod list to return data ... I0113 06:40:41.152230 81479 kubeadm.go:573] duration metric: took 1.169625728s to wait for : map[apiserver:true system_pods:true] ... I0113 06:40:41.152247 81479 node_conditions.go:102] verifying NodePressure condition ... I0113 06:40:41.158236 81479 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki I0113 06:40:41.158763 81479 node_conditions.go:123] node cpu capacity is 2 I0113 06:40:41.158937 81479 node_conditions.go:105] duration metric: took 6.636722ms to run NodePressure ... I0113 06:40:41.158946 81479 start.go:217] waiting for startup goroutines ... I0113 06:40:41.159375 81479 ssh_runner.go:195] Run: rm -f paused I0113 06:40:41.368502 81479 start.go:506] kubectl: 1.26.0, cluster: 1.25.3 (minor skew: 1) I0113 06:40:41.389158 81479 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default * * ==> Docker <== * -- Journal begins at Fri 2023-01-13 01:10:05 UTC, ends at Sat 2023-01-14 00:20:12 UTC. -- Jan 14 00:02:32 minikube dockerd[974]: time="2023-01-14T00:02:32.889939919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:02:32 minikube dockerd[974]: time="2023-01-14T00:02:32.890896161Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a7423d90119d3ed67ffcc331a30e3ef1bcec6e22fb25c84f436a3ad64e89412b pid=405310 runtime=io.containerd.runc.v2 Jan 14 00:02:51 minikube dockerd[974]: time="2023-01-14T00:02:51.611892641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:02:51 minikube dockerd[974]: time="2023-01-14T00:02:51.612362443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:02:51 minikube dockerd[974]: time="2023-01-14T00:02:51.612539355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:02:51 minikube dockerd[974]: time="2023-01-14T00:02:51.613315332Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6ea4a1e63bad3cd3ef1ab9f821cfb8dc0058aa167d132cdb9edbb3be9dcaa0f5 pid=405527 runtime=io.containerd.runc.v2 Jan 14 00:02:57 minikube dockerd[968]: time="2023-01-14T00:02:57.307346501Z" level=info msg="ignoring event" container=6ea4a1e63bad3cd3ef1ab9f821cfb8dc0058aa167d132cdb9edbb3be9dcaa0f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 14 00:02:57 minikube dockerd[974]: time="2023-01-14T00:02:57.308352631Z" level=info msg="shim disconnected" id=6ea4a1e63bad3cd3ef1ab9f821cfb8dc0058aa167d132cdb9edbb3be9dcaa0f5 Jan 14 00:02:57 minikube dockerd[974]: time="2023-01-14T00:02:57.308458929Z" level=warning msg="cleaning up after shim disconnected" id=6ea4a1e63bad3cd3ef1ab9f821cfb8dc0058aa167d132cdb9edbb3be9dcaa0f5 namespace=moby Jan 14 00:02:57 minikube dockerd[974]: time="2023-01-14T00:02:57.308488201Z" level=info msg="cleaning up dead shim" Jan 14 00:02:57 minikube dockerd[974]: time="2023-01-14T00:02:57.326687370Z" level=warning msg="cleanup warnings time=\"2023-01-14T00:02:57Z\" level=info msg=\"starting signal loop\" namespace=moby pid=405704 runtime=io.containerd.runc.v2\n" Jan 14 00:05:24 minikube dockerd[974]: time="2023-01-14T00:05:24.355590046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:05:24 minikube dockerd[974]: time="2023-01-14T00:05:24.355665289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:05:24 minikube dockerd[974]: time="2023-01-14T00:05:24.356126636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:05:24 minikube dockerd[974]: time="2023-01-14T00:05:24.356737570Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2eb816cccda975a19e9042e14e0933b90e27c506cf5a4a2a308e3940cdb2cd7d pid=407424 runtime=io.containerd.runc.v2 Jan 14 00:05:26 minikube dockerd[968]: time="2023-01-14T00:05:26.686835150Z" level=info msg="ignoring event" container=2eb816cccda975a19e9042e14e0933b90e27c506cf5a4a2a308e3940cdb2cd7d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 14 00:05:26 minikube dockerd[974]: time="2023-01-14T00:05:26.686484712Z" level=info msg="shim disconnected" id=2eb816cccda975a19e9042e14e0933b90e27c506cf5a4a2a308e3940cdb2cd7d Jan 14 00:05:26 minikube dockerd[974]: time="2023-01-14T00:05:26.687201476Z" level=warning msg="cleaning up after shim disconnected" id=2eb816cccda975a19e9042e14e0933b90e27c506cf5a4a2a308e3940cdb2cd7d namespace=moby Jan 14 00:05:26 minikube dockerd[974]: time="2023-01-14T00:05:26.687256534Z" level=info msg="cleaning up dead shim" Jan 14 00:05:26 minikube dockerd[974]: time="2023-01-14T00:05:26.697714815Z" level=warning msg="cleanup warnings time=\"2023-01-14T00:05:26Z\" level=info msg=\"starting signal loop\" namespace=moby pid=407477 runtime=io.containerd.runc.v2\n" Jan 14 00:06:09 minikube dockerd[974]: time="2023-01-14T00:06:09.749949591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:06:09 minikube dockerd[974]: time="2023-01-14T00:06:09.750023449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:06:09 minikube dockerd[974]: time="2023-01-14T00:06:09.750055431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:06:09 minikube dockerd[974]: time="2023-01-14T00:06:09.751037378Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f13d13f8522e141640ec9994fd43051dc9da81db0c1a23f1689c660b29016735 pid=407933 runtime=io.containerd.runc.v2 Jan 14 00:10:08 minikube dockerd[968]: time="2023-01-14T00:10:08.292524548Z" level=info msg="ignoring event" container=f13d13f8522e141640ec9994fd43051dc9da81db0c1a23f1689c660b29016735 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 14 00:10:08 minikube dockerd[974]: time="2023-01-14T00:10:08.293818557Z" level=info msg="shim disconnected" id=f13d13f8522e141640ec9994fd43051dc9da81db0c1a23f1689c660b29016735 Jan 14 00:10:08 minikube dockerd[974]: time="2023-01-14T00:10:08.293891512Z" level=warning msg="cleaning up after shim disconnected" id=f13d13f8522e141640ec9994fd43051dc9da81db0c1a23f1689c660b29016735 namespace=moby Jan 14 00:10:08 minikube dockerd[974]: time="2023-01-14T00:10:08.293914098Z" level=info msg="cleaning up dead shim" Jan 14 00:10:08 minikube dockerd[974]: time="2023-01-14T00:10:08.306763443Z" level=warning msg="cleanup warnings time=\"2023-01-14T00:10:08Z\" level=info msg=\"starting signal loop\" namespace=moby pid=410799 runtime=io.containerd.runc.v2\n" Jan 14 00:10:12 minikube dockerd[974]: time="2023-01-14T00:10:12.526276237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:10:12 minikube dockerd[974]: time="2023-01-14T00:10:12.527412558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:10:12 minikube dockerd[974]: time="2023-01-14T00:10:12.527475520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:10:12 minikube dockerd[974]: time="2023-01-14T00:10:12.528271844Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/504dcbe733073390c3ad95eecb308e6ae7b309420f44a4b6b5d4cd7cfe6352f9 pid=410826 runtime=io.containerd.runc.v2 Jan 14 00:10:16 minikube dockerd[974]: time="2023-01-14T00:10:16.175039162Z" level=info msg="shim disconnected" id=504dcbe733073390c3ad95eecb308e6ae7b309420f44a4b6b5d4cd7cfe6352f9 Jan 14 00:10:16 minikube dockerd[968]: time="2023-01-14T00:10:16.175574648Z" level=info msg="ignoring event" container=504dcbe733073390c3ad95eecb308e6ae7b309420f44a4b6b5d4cd7cfe6352f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 14 00:10:16 minikube dockerd[974]: time="2023-01-14T00:10:16.176467724Z" level=warning msg="cleaning up after shim disconnected" id=504dcbe733073390c3ad95eecb308e6ae7b309420f44a4b6b5d4cd7cfe6352f9 namespace=moby Jan 14 00:10:16 minikube dockerd[974]: time="2023-01-14T00:10:16.176822516Z" level=info msg="cleaning up dead shim" Jan 14 00:10:16 minikube dockerd[974]: time="2023-01-14T00:10:16.194069102Z" level=warning msg="cleanup warnings time=\"2023-01-14T00:10:16Z\" level=info msg=\"starting signal loop\" namespace=moby pid=410875 runtime=io.containerd.runc.v2\n" Jan 14 00:10:23 minikube dockerd[974]: time="2023-01-14T00:10:23.745926991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:10:23 minikube dockerd[974]: time="2023-01-14T00:10:23.746023336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:10:23 minikube dockerd[974]: time="2023-01-14T00:10:23.746054918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:10:23 minikube dockerd[974]: time="2023-01-14T00:10:23.746365941Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/35f6bd649d0cb8e120358101577f1c738a68750b03c3a814b7794a6f42ac022b pid=411028 runtime=io.containerd.runc.v2 Jan 14 00:10:25 minikube dockerd[974]: time="2023-01-14T00:10:25.126366448Z" level=info msg="shim disconnected" id=35f6bd649d0cb8e120358101577f1c738a68750b03c3a814b7794a6f42ac022b Jan 14 00:10:25 minikube dockerd[968]: time="2023-01-14T00:10:25.126963710Z" level=info msg="ignoring event" container=35f6bd649d0cb8e120358101577f1c738a68750b03c3a814b7794a6f42ac022b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 14 00:10:25 minikube dockerd[974]: time="2023-01-14T00:10:25.127799938Z" level=warning msg="cleaning up after shim disconnected" id=35f6bd649d0cb8e120358101577f1c738a68750b03c3a814b7794a6f42ac022b namespace=moby Jan 14 00:10:25 minikube dockerd[974]: time="2023-01-14T00:10:25.127829021Z" level=info msg="cleaning up dead shim" Jan 14 00:10:25 minikube dockerd[974]: time="2023-01-14T00:10:25.144063249Z" level=warning msg="cleanup warnings time=\"2023-01-14T00:10:25Z\" level=info msg=\"starting signal loop\" namespace=moby pid=411071 runtime=io.containerd.runc.v2\n" Jan 14 00:14:36 minikube dockerd[974]: time="2023-01-14T00:14:36.718089373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:14:36 minikube dockerd[974]: time="2023-01-14T00:14:36.718263872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:14:36 minikube dockerd[974]: time="2023-01-14T00:14:36.718320639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:14:36 minikube dockerd[974]: time="2023-01-14T00:14:36.718911697Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2d4dbcc04e6f97924697e26ac9e8ab44d117a7d44aea7a91ce3b5f195a6086ea pid=413929 runtime=io.containerd.runc.v2 Jan 14 00:14:38 minikube dockerd[974]: time="2023-01-14T00:14:38.380341347Z" level=info msg="shim disconnected" id=2d4dbcc04e6f97924697e26ac9e8ab44d117a7d44aea7a91ce3b5f195a6086ea Jan 14 00:14:38 minikube dockerd[974]: time="2023-01-14T00:14:38.382594154Z" level=warning msg="cleaning up after shim disconnected" id=2d4dbcc04e6f97924697e26ac9e8ab44d117a7d44aea7a91ce3b5f195a6086ea namespace=moby Jan 14 00:14:38 minikube dockerd[974]: time="2023-01-14T00:14:38.382711087Z" level=info msg="cleaning up dead shim" Jan 14 00:14:38 minikube dockerd[968]: time="2023-01-14T00:14:38.383002720Z" level=info msg="ignoring event" container=2d4dbcc04e6f97924697e26ac9e8ab44d117a7d44aea7a91ce3b5f195a6086ea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Jan 14 00:14:38 minikube dockerd[974]: time="2023-01-14T00:14:38.400043344Z" level=warning msg="cleanup warnings time=\"2023-01-14T00:14:38Z\" level=info msg=\"starting signal loop\" namespace=moby pid=414098 runtime=io.containerd.runc.v2\n" Jan 14 00:14:58 minikube dockerd[974]: time="2023-01-14T00:14:58.858019904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 00:14:58 minikube dockerd[974]: time="2023-01-14T00:14:58.858144570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 00:14:58 minikube dockerd[974]: time="2023-01-14T00:14:58.858170201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 00:14:58 minikube dockerd[974]: time="2023-01-14T00:14:58.858763408Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f246cce2ff5350df8e867d96a55e31d58611c16e608a4cb503da8172712c32f5 pid=414282 runtime=io.containerd.runc.v2 * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID a7423d90119d3 gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da 17 minutes ago Running registry-proxy 0 8cb153ea389b4 34f86ce775dad registry@sha256:83bb78d7b28f1ac99c68133af32c93e9a1c149bcd3cb6e683a3ee56e312f1c96 17 minutes ago Running registry 0 32e297e1cf5e0 1796bdb3bb9a6 de92c5f8374c2 20 minutes ago Running grid-node-chrome 0 f1e73dc0cdaef 2206b81e233a4 selenium/hub@sha256:db3027f826ad1ec27434d4c5a128dec83ce2ec7dc8edcc0aeaa02c19ccd2e2fa 6 hours ago Running selenium-hub 0 d881dd086b8fa 4ccb70ed3ea41 registry.k8s.io/e2e-test-images/jessie-dnsutils@sha256:8b03e4185ecd305bc9b410faac15d486a3b1ef1946196d429245cdd3c7b152eb 23 hours ago Running dnsutils 0 6ab57dbb8794e 64ff8adc6160b gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f 23 hours ago Running minikube-ingress-dns 0 818fe08470136 ea6aa89b25b66 k8s.gcr.io/ingress-nginx/controller@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8 23 hours ago Running controller 0 e6937e702740e 9ac9f51ea88e1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 23 hours ago Exited patch 0 628578b6c0759 7eaa8c463b818 k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 23 hours ago Exited create 0 8754472697fe9 45769dcfc5312 6e38f40d628db 23 hours ago Running storage-provisioner 1 9eef21ad73a39 9765fb94cfcb1 5185b96f0becf 23 hours ago Running coredns 0 d9fd8f335756a bf20f17cbb061 6e38f40d628db 23 hours ago Exited storage-provisioner 0 9eef21ad73a39 e9a11cba64b7e beaaf00edd38a 23 hours ago Running kube-proxy 0 4bcd0d8d46f85 608c1a4286090 a8a176a5d5d69 23 hours ago Running etcd 0 5584fb04d097a 2d6d1da7db275 0346dbd74bcb9 23 hours ago Running kube-apiserver 0 72115fac94184 58573efc523c6 6d23ec0e8b87e 23 hours ago Running kube-scheduler 0 49ba2628786e8 cf00a2a9118c4 6039992312758 23 hours ago Running kube-controller-manager 0 9f01bd50d22da * * ==> coredns [9765fb94cfcb] <== * .:53 [INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad CoreDNS-1.9.3 linux/amd64, go1.18.2, 45b0a11 [INFO] Reloading [INFO] plugin/health: Going into lameduck mode for 5s [INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea [INFO] Reloading complete [INFO] 127.0.0.1:52845 - 40346 "HINFO IN 
4498731539297172728.7777359036586493675. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036444987s [INFO] 172.17.0.1:60359 - 17117 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141973s [INFO] 172.17.0.1:30385 - 58924 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000233927s [INFO] 172.17.0.1:41178 - 53608 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000200956s [INFO] 172.17.0.1:15159 - 62057 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000315591s [INFO] 172.17.0.1:29266 - 59216 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134511s [INFO] 172.17.0.1:22671 - 13018 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000247664s [INFO] 172.17.0.1:35917 - 23930 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000262941s [INFO] 172.17.0.1:58107 - 43409 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000347005s [INFO] 172.17.0.1:64256 - 63111 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157523s [INFO] 172.17.0.1:31587 - 18257 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.002636956s [INFO] 172.17.0.1:61985 - 44578 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000221126s [INFO] 172.17.0.1:51378 - 14278 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00021566s [INFO] 172.17.0.1:54203 - 32531 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189756s [INFO] 172.17.0.1:36488 - 396 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137119s [INFO] 172.17.0.1:2571 - 3785 "A IN grid-hub-svc.default.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220162s [INFO] 172.17.0.1:65221 - 45268 "A IN accounts.google.com.default.svc.cluster.local. udp 63 false 512" NXDOMAIN qr,aa,rd 156 0.000274917s [INFO] 172.17.0.1:24937 - 37823 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.001031812s [INFO] 172.17.0.1:53377 - 8818 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000181779s [INFO] 172.17.0.1:12459 - 7350 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.047624482s [INFO] 172.17.0.1:39201 - 36780 "A IN optimizationguide-pa.googleapis.com.default.svc.cluster.local. udp 79 false 512" NXDOMAIN qr,aa,rd 172 0.000247849s [INFO] 172.17.0.1:30675 - 2652 "A IN optimizationguide-pa.googleapis.com.svc.cluster.local. udp 71 false 512" NXDOMAIN qr,aa,rd 164 0.000192083s [INFO] 172.17.0.1:20841 - 28949 "A IN optimizationguide-pa.googleapis.com.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00022946s [INFO] 172.17.0.1:20035 - 14536 "A IN optimizationguide-pa.googleapis.com. udp 53 false 512" NOERROR qr,rd,ra 869 0.006310451s [INFO] 172.17.0.1:56902 - 63338 "A IN update.googleapis.com.svc.cluster.local. udp 57 false 512" NXDOMAIN qr,aa,rd 150 0.000146421s [INFO] 172.17.0.1:31664 - 12413 "A IN update.googleapis.com.default.svc.cluster.local. udp 65 false 512" NXDOMAIN qr,aa,rd 158 0.000274278s [INFO] 172.17.0.1:11750 - 58668 "A IN update.googleapis.com.cluster.local. 
udp 53 false 512" NXDOMAIN qr,aa,rd 146 0.000149315s [INFO] 172.17.0.1:27232 - 58354 "A IN update.googleapis.com. udp 39 false 512" NOERROR qr,rd,ra 76 0.037404965s [INFO] 172.17.0.1:42075 - 32244 "A IN edgedl.me.gvt1.com.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024215s [INFO] 172.17.0.1:13727 - 121 "A IN edgedl.me.gvt1.com.svc.cluster.local. udp 54 false 512" NXDOMAIN qr,aa,rd 147 0.000148024s [INFO] 172.17.0.1:44123 - 31587 "A IN edgedl.me.gvt1.com.cluster.local. udp 50 false 512" NXDOMAIN qr,aa,rd 143 0.000185355s [INFO] 172.17.0.1:6231 - 41012 "A IN edgedl.me.gvt1.com. udp 36 false 512" NOERROR qr,rd,ra 70 0.027988928s [INFO] 172.17.0.1:5190 - 36881 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.007424778s [INFO] 172.17.0.1:5190 - 8787 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.014477423s [INFO] 172.17.0.1:47639 - 8889 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000137091s [INFO] 172.17.0.1:47639 - 61570 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127606s [INFO] 172.17.0.1:3721 - 59499 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145678s [INFO] 172.17.0.1:3721 - 56866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000211977s [INFO] 172.17.0.1:19236 - 29322 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000231248s [INFO] 172.17.0.1:19236 - 41205 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00114076s [INFO] 172.17.0.1:27916 - 14705 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105842s [INFO] 172.17.0.1:22590 - 17860 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110535s [INFO] 172.17.0.1:56712 - 12040 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118923s [INFO] 172.17.0.1:54039 - 61757 "A IN registry.kube-system.svc.cluster.local. 
udp 56 false 512" NOERROR qr,aa,rd 110 0.000093062s * * ==> describe nodes <== * Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=986b1ebd987211ed16f8cc10aed7d2c42fc8392f minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2023_01_13T06_40_39_0700 minikube.k8s.io/version=v1.28.0 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Fri, 13 Jan 2023 01:10:38 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Sat, 14 Jan 2023 00:20:09 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 14 Jan 2023 00:18:27 +0000 Fri, 13 Jan 2023 02:15:06 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 14 Jan 2023 00:18:27 +0000 Fri, 13 Jan 2023 02:15:06 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 14 Jan 2023 00:18:27 +0000 Fri, 13 Jan 2023 02:15:06 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 14 Jan 2023 00:18:27 +0000 Fri, 13 Jan 2023 02:15:06 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.64.2 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 17784752Ki hugepages-2Mi: 0 memory: 3914660Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 17784752Ki hugepages-2Mi: 0 memory: 3914660Ki pods: 110 System Info: Machine ID: 3f3aa54e43e440d690d8676df8041131 System UUID: fdf711ed-0000-0000-9b0a-acde48001122 Boot ID: 1584c4da-79b5-4cf7-8ff1-dd566d5ae61b Kernel Version: 5.10.57 OS Image: Buildroot 2021.02.12 Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.20 Kubelet Version: v1.25.3 Kube-Proxy Version: v1.25.3 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (14 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default dnsutils 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 22h default grid-hub-8cbb5c87d-mmkc4 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 6h2m default grid-node-chrome-7df9bdb896-4cc75 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 20m ingress-nginx ingress-nginx-controller-5959f988fd-6dckp 100m (5%!)(MISSING) 0 (0%!)(MISSING) 90Mi (2%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system coredns-565d847f94-rzr49 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (4%!)(MISSING) 23h kube-system etcd-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (2%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system kube-apiserver-minikube 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system kube-controller-manager-minikube 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system kube-ingress-dns-minikube 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system kube-proxy-rpfwv 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system 
kube-scheduler-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 23h kube-system registry-2tlp8 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 18m kube-system registry-proxy-hz5jg 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 18m kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 23h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 850m (42%!)(MISSING) 0 (0%!)(MISSING) memory 260Mi (6%!)(MISSING) 170Mi (4%!)(MISSING) ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: * * ==> dmesg <== * [Jan13 13:29] ERROR: earlyprintk= earlyser already used [ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.123295] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173) [ +4.823127] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182) [ +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618) [ +0.008769] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +2.179915] systemd-fstab-generator[125]: Ignoring "noauto" for root device [ +0.048590] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.) [ +1.989418] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory [ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery [ +0.000001] NFSD: Unable to initialize client recovery tracking! 
(-2) [ +1.154473] systemd-fstab-generator[545]: Ignoring "noauto" for root device [ +0.091240] systemd-fstab-generator[556]: Ignoring "noauto" for root device [Jan13 13:30] systemd-fstab-generator[774]: Ignoring "noauto" for root device [ +1.672761] kauditd_printk_skb: 16 callbacks suppressed [ +0.301951] systemd-fstab-generator[937]: Ignoring "noauto" for root device [ +0.099046] systemd-fstab-generator[948]: Ignoring "noauto" for root device [ +0.149129] systemd-fstab-generator[959]: Ignoring "noauto" for root device [ +1.369823] systemd-fstab-generator[1109]: Ignoring "noauto" for root device [ +0.095185] systemd-fstab-generator[1120]: Ignoring "noauto" for root device [ +3.839235] systemd-fstab-generator[1320]: Ignoring "noauto" for root device [ +0.616565] kauditd_printk_skb: 68 callbacks suppressed [ +14.148410] systemd-fstab-generator[1997]: Ignoring "noauto" for root device [ +12.522286] kauditd_printk_skb: 8 callbacks suppressed [Jan13 13:31] kauditd_printk_skb: 17 callbacks suppressed [ +17.079998] kauditd_printk_skb: 4 callbacks suppressed [Jan13 13:36] kauditd_printk_skb: 4 callbacks suppressed [ +15.119882] kauditd_printk_skb: 8 callbacks suppressed [Jan13 13:55] kauditd_printk_skb: 4 callbacks suppressed [ +6.883991] kauditd_printk_skb: 4 callbacks suppressed [Jan13 14:34] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large: [ +0.000047] clocksource: 'hpet' wd_now: 6cbca89c wd_last: 6b15944e mask: ffffffff [ +0.000027] clocksource: 'tsc' cs_now: c90fbf33cc1e cs_last: c87725186d1e mask: ffffffffffffffff [ +0.012314] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'. [ +0.105857] clocksource: Checking clocksource tsc synchronization from CPU 1. [Jan13 14:36] hrtimer: interrupt took 1559320 ns [Jan13 18:17] kauditd_printk_skb: 8 callbacks suppressed * * ==> etcd [608c1a428609] <== * {"level":"info","ts":"2023-01-13T22:04:46.149Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":26386,"took":"433.315ยตs"} {"level":"info","ts":"2023-01-13T22:09:46.156Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":26636} {"level":"info","ts":"2023-01-13T22:09:46.158Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":26636,"took":"443.504ยตs"} {"level":"info","ts":"2023-01-13T22:14:46.163Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":26887} {"level":"info","ts":"2023-01-13T22:14:46.164Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":26887,"took":"439.636ยตs"} {"level":"info","ts":"2023-01-13T22:19:46.173Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":27137} {"level":"info","ts":"2023-01-13T22:19:46.174Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":27137,"took":"347.86ยตs"} {"level":"info","ts":"2023-01-13T22:24:46.180Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":27386} {"level":"info","ts":"2023-01-13T22:24:46.182Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":27386,"took":"419.634ยตs"} {"level":"info","ts":"2023-01-13T22:29:46.186Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":27637} {"level":"info","ts":"2023-01-13T22:29:46.187Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled 
compaction","compact-revision":27637,"took":"371.48ยตs"} {"level":"info","ts":"2023-01-13T22:34:46.193Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":27887} {"level":"info","ts":"2023-01-13T22:34:46.194Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":27887,"took":"410.462ยตs"} {"level":"info","ts":"2023-01-13T22:39:46.202Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":28136} {"level":"info","ts":"2023-01-13T22:39:46.203Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":28136,"took":"364.894ยตs"} {"level":"info","ts":"2023-01-13T22:44:46.208Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":28387} {"level":"info","ts":"2023-01-13T22:44:46.209Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":28387,"took":"365.986ยตs"} {"level":"info","ts":"2023-01-13T22:49:46.215Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":28637} {"level":"info","ts":"2023-01-13T22:49:46.217Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":28637,"took":"1.927629ms"} {"level":"info","ts":"2023-01-13T22:54:46.223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":28888} {"level":"info","ts":"2023-01-13T22:54:46.225Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":28888,"took":"360.107ยตs"} {"level":"info","ts":"2023-01-13T22:59:46.231Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":29138} {"level":"info","ts":"2023-01-13T22:59:46.232Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":29138,"took":"467.529ยตs"} {"level":"info","ts":"2023-01-13T23:04:46.241Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":29389} {"level":"info","ts":"2023-01-13T23:04:46.242Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":29389,"took":"484.779ยตs"} {"level":"info","ts":"2023-01-13T23:09:46.249Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":29639} {"level":"info","ts":"2023-01-13T23:09:46.250Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":29639,"took":"363.717ยตs"} {"level":"info","ts":"2023-01-13T23:14:46.257Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":29889} {"level":"info","ts":"2023-01-13T23:14:46.258Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":29889,"took":"584.965ยตs"} {"level":"warn","ts":"2023-01-13T23:15:55.494Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"104.015193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:8"} {"level":"info","ts":"2023-01-13T23:15:55.501Z","caller":"traceutil/trace.go:171","msg":"trace[1849375271] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:30245; }","duration":"115.93662ms","start":"2023-01-13T23:15:55.379Z","end":"2023-01-13T23:15:55.495Z","steps":["trace[1849375271] 'agreement among raft nodes before linearized reading' (duration: 100.284085ms)"],"step_count":1} 
{"level":"info","ts":"2023-01-13T23:19:46.265Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":30175} {"level":"info","ts":"2023-01-13T23:19:46.267Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":30175,"took":"623.906ยตs"} {"level":"info","ts":"2023-01-13T23:24:46.275Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":30504} {"level":"info","ts":"2023-01-13T23:24:46.277Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":30504,"took":"496.818ยตs"} {"level":"info","ts":"2023-01-13T23:29:46.283Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":30822} {"level":"info","ts":"2023-01-13T23:29:46.284Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":30822,"took":"436.353ยตs"} {"level":"info","ts":"2023-01-13T23:34:46.291Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":31081} {"level":"info","ts":"2023-01-13T23:34:46.292Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":31081,"took":"363.375ยตs"} {"level":"info","ts":"2023-01-13T23:39:46.301Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":31336} {"level":"info","ts":"2023-01-13T23:39:46.302Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":31336,"took":"390.505ยตs"} {"level":"info","ts":"2023-01-13T23:44:46.308Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":31588} {"level":"info","ts":"2023-01-13T23:44:46.309Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":31588,"took":"420.863ยตs"} {"level":"info","ts":"2023-01-13T23:49:46.316Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":31898} {"level":"info","ts":"2023-01-13T23:49:46.317Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":31898,"took":"609.232ยตs"} {"level":"info","ts":"2023-01-13T23:54:42.347Z","caller":"etcdserver/server.go:1383","msg":"triggering snapshot","local-member-id":"77ee71d7bb7b0f2","local-member-applied-index":40004,"local-member-snapshot-index":30003,"local-member-snapshot-count":10000} {"level":"info","ts":"2023-01-13T23:54:42.356Z","caller":"etcdserver/server.go:2394","msg":"saved snapshot","snapshot-index":40004} {"level":"info","ts":"2023-01-13T23:54:42.358Z","caller":"etcdserver/server.go:2424","msg":"compacted Raft logs","compact-index":35004} {"level":"info","ts":"2023-01-13T23:54:46.324Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":32153} {"level":"info","ts":"2023-01-13T23:54:46.324Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":32153,"took":"443.173ยตs"} {"level":"info","ts":"2023-01-13T23:59:46.331Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":32407} {"level":"info","ts":"2023-01-13T23:59:46.333Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":32407,"took":"373.191ยตs"} {"level":"info","ts":"2023-01-14T00:04:46.339Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":32790} {"level":"info","ts":"2023-01-14T00:04:46.341Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":32790,"took":"1.19385ms"} 
{"level":"info","ts":"2023-01-14T00:09:46.349Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":33091} {"level":"info","ts":"2023-01-14T00:09:46.350Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":33091,"took":"586.776ยตs"} {"level":"info","ts":"2023-01-14T00:14:46.357Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":33343} {"level":"info","ts":"2023-01-14T00:14:46.360Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":33343,"took":"903.696ยตs"} {"level":"info","ts":"2023-01-14T00:19:46.364Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":33596} {"level":"info","ts":"2023-01-14T00:19:46.365Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":33596,"took":"427.635ยตs"} * * ==> kernel <== * 00:20:13 up 10:50, 0 users, load average: 1.18, 1.04, 0.87 Linux minikube 5.10.57 #1 SMP Fri Oct 28 21:02:11 UTC 2022 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2021.02.12" * * ==> kube-apiserver [2d6d1da7db27] <== * I0113 01:10:35.193575 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister I0113 01:10:35.193790 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0113 01:10:35.262702 1 controller.go:616] quota admission added evaluator for: namespaces I0113 01:10:35.265185 1 apf_controller.go:305] Running API Priority and Fairness config worker I0113 01:10:35.267325 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller I0113 01:10:35.267456 1 cache.go:39] Caches are synced for AvailableConditionController controller I0113 01:10:35.267341 1 shared_informer.go:262] Caches are synced for node_authorizer I0113 01:10:35.274698 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0113 01:10:35.290783 1 cache.go:39] Caches are synced for autoregister controller I0113 01:10:35.294011 1 shared_informer.go:262] Caches are synced for crd-autoregister I0113 01:10:35.966296 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0113 01:10:36.173209 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000 I0113 01:10:36.178015 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000 I0113 01:10:36.178050 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. 
I0113 01:10:36.523593 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0113 01:10:36.558781 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0113 01:10:36.683166 1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0113 01:10:36.689316 1 lease.go:250] Resetting endpoints for master service "kubernetes" to [192.168.64.2] I0113 01:10:36.690448 1 controller.go:616] quota admission added evaluator for: endpoints I0113 01:10:36.697918 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io I0113 01:10:37.235050 1 controller.go:616] quota admission added evaluator for: serviceaccounts I0113 01:10:39.253018 1 controller.go:616] quota admission added evaluator for: deployments.apps I0113 01:10:39.261555 1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0113 01:10:39.270217 1 controller.go:616] quota admission added evaluator for: daemonsets.apps I0113 01:10:39.397053 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io I0113 01:10:50.695231 1 controller.go:616] quota admission added evaluator for: replicasets.apps I0113 01:10:50.891881 1 controller.go:616] quota admission added evaluator for: controllerrevisions.apps I0113 01:11:35.006740 1 alloc.go:327] "allocated clusterIPs" service="default/grid-svc" clusterIPs=map[IPv4:10.105.190.28] I0113 01:11:35.041744 1 alloc.go:327] "allocated clusterIPs" service="default/grid-hub-svc" clusterIPs=map[IPv4:10.103.137.63] I0113 01:14:47.519703 1 alloc.go:327] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs=map[IPv4:10.104.172.19] I0113 01:14:47.538656 1 alloc.go:327] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs=map[IPv4:10.106.185.185] I0113 01:14:47.570629 1 controller.go:616] quota admission added evaluator for: jobs.batch I0113 01:16:48.062934 1 alloc.go:327] "allocated clusterIPs" service="default/grid-svc" clusterIPs=map[IPv4:10.98.99.235] I0113 01:16:48.100218 1 alloc.go:327] "allocated clusterIPs" service="default/grid-hub-svc" clusterIPs=map[IPv4:10.103.233.237] I0113 01:35:41.615852 1 alloc.go:327] "allocated clusterIPs" service="default/grid-svc" clusterIPs=map[IPv4:10.108.238.83] I0113 01:35:41.635226 1 alloc.go:327] "allocated clusterIPs" service="default/grid-hub-svc" clusterIPs=map[IPv4:10.99.13.43] I0113 02:14:55.162703 1 trace.go:205] Trace[2027895446]: "DeltaFIFO Pop Process" ID:v1beta1.flowcontrol.apiserver.k8s.io,Depth:21,Reason:slow event handlers blocking the queue (13-Jan-2023 02:14:55.041) (total time: 114ms): Trace[2027895446]: [114.441151ms] [114.441151ms] END E0113 06:10:21.146910 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 06:10:21.152717 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 08:16:33.990257 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 08:16:33.999721 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 10:22:18.661276 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 
10:22:18.663095 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 12:42:40.051642 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 12:42:40.063090 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 14:11:39.780287 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 14:11:39.780330 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 14:46:39.364533 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 14:46:39.365682 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 14:46:58.899169 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 14:46:58.925449 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 15:13:29.934794 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" E0113 15:13:29.951603 1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, service account token has expired]" I0113 16:21:41.803488 1 alloc.go:327] "allocated clusterIPs" service="default/node-svc" clusterIPs=map[IPv4:10.98.64.138] I0113 16:32:16.944210 1 alloc.go:327] "allocated clusterIPs" service="default/node-svc" clusterIPs=map[IPv4:10.111.13.16] I0113 18:17:35.043900 1 alloc.go:327] "allocated clusterIPs" service="default/grid-hub-svc" clusterIPs=map[IPv4:10.100.248.153] I0113 18:20:10.971000 1 alloc.go:327] "allocated clusterIPs" service="default/node-svc" clusterIPs=map[IPv4:10.111.213.162] I0113 19:47:36.771017 1 alloc.go:327] "allocated clusterIPs" service="default/node-svc" clusterIPs=map[IPv4:10.103.20.172] I0114 00:02:09.509298 1 alloc.go:327] "allocated clusterIPs" service="kube-system/registry" clusterIPs=map[IPv4:10.102.114.209] * * ==> kube-controller-manager [cf00a2a9118c] <== * E0113 14:11:39.783029 1 resource_quota_controller.go:417] failed to discover resources: Unauthorized W0113 14:46:39.367446 1 endpointslice_controller.go:306] Error syncing endpoint slices for service "default/grid-hub-svc", retrying. 
Error: failed to update grid-hub-svc-rzjgz EndpointSlice for Service default/grid-hub-svc: Unauthorized I0113 14:46:39.370806 1 event.go:285] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"grid-hub-svc", UID:"d6620ab6-75ea-4440-ae58-ae535f1a7e38", APIVersion:"v1", ResourceVersion:"4330", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service default/grid-hub-svc: failed to update grid-hub-svc-rzjgz EndpointSlice for Service default/grid-hub-svc: Unauthorized I0113 14:46:39.373253 1 event.go:294] "Event occurred" object="default/grid-hub-svc" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint default/grid-hub-svc: Unauthorized" I0113 14:46:58.915863 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Warning" reason="ReplicaSetCreateError" message="Failed to create new replica set \"grid-node-chrome-66f4df5b94\": Unauthorized" I0113 14:46:58.924483 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-66f4df5b94 to 1" I0113 14:46:58.929126 1 event.go:294] "Event occurred" object="default/grid-node-chrome-66f4df5b94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: Unauthorized" E0113 14:46:58.957350 1 replica_set.go:550] sync "default/grid-node-chrome-66f4df5b94" failed with Unauthorized I0113 14:46:58.975234 1 event.go:294] "Event occurred" object="default/grid-node-chrome-66f4df5b94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-66f4df5b94-76kmc" I0113 14:48:06.596925 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-6b5d576b57 to 1" I0113 14:48:06.621704 1 event.go:294] "Event occurred" object="default/grid-node-chrome-6b5d576b57" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-6b5d576b57-hc2zs" I0113 14:48:08.642055 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set grid-node-chrome-66f4df5b94 to 0 from 1" I0113 14:48:08.660464 1 event.go:294] "Event occurred" object="default/grid-node-chrome-66f4df5b94" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: grid-node-chrome-66f4df5b94-76kmc" I0113 15:10:55.393397 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-9ffcb9654 to 1" I0113 15:10:55.407638 1 event.go:294] "Event occurred" object="default/grid-node-chrome-9ffcb9654" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-9ffcb9654-jc9g5" I0113 15:10:56.825660 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" 
reason="ScalingReplicaSet" message="Scaled down replica set grid-node-chrome-6b5d576b57 to 0 from 1" I0113 15:10:56.873452 1 event.go:294] "Event occurred" object="default/grid-node-chrome-6b5d576b57" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: grid-node-chrome-6b5d576b57-hc2zs" W0113 15:13:29.937467 1 garbagecollector.go:754] failed to discover preferred resources: Unauthorized E0113 15:13:29.953225 1 resource_quota_controller.go:417] failed to discover resources: Unauthorized I0113 16:32:16.928831 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-5857d5855c to 1" I0113 16:32:16.972017 1 event.go:294] "Event occurred" object="default/grid-node-chrome-5857d5855c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-5857d5855c-97jgj" I0113 18:10:00.107246 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-7477bdb5c to 1" I0113 18:10:00.158158 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7477bdb5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7477bdb5c-j8mdg" I0113 18:10:01.230323 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set grid-node-chrome-5857d5855c to 0 from 1" I0113 18:10:01.251470 1 event.go:294] "Event occurred" object="default/grid-node-chrome-5857d5855c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: grid-node-chrome-5857d5855c-97jgj" W0113 18:10:01.285628 1 endpointslice_controller.go:306] Error syncing endpoint slices for service "default/node-svc", retrying. 
Error: EndpointSlice informer cache is out of date I0113 18:15:56.296863 1 event.go:294] "Event occurred" object="default/grid-hub" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-hub-8cbb5c87d to 1" I0113 18:15:56.315157 1 event.go:294] "Event occurred" object="default/grid-hub-8cbb5c87d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-hub-8cbb5c87d-rkhc7" I0113 18:17:35.024088 1 event.go:294] "Event occurred" object="default/grid-hub" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-hub-8cbb5c87d to 1" I0113 18:17:35.065443 1 event.go:294] "Event occurred" object="default/grid-hub-8cbb5c87d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-hub-8cbb5c87d-mmkc4" I0113 18:20:10.937236 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-7477bdb5c to 2" I0113 18:20:10.963342 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7477bdb5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7477bdb5c-h9n87" I0113 18:20:10.980905 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7477bdb5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7477bdb5c-k25gx" W0113 18:20:11.023302 1 endpointslice_controller.go:306] Error syncing endpoint slices for service "default/node-svc", retrying. 
Error: EndpointSlice informer cache is out of date I0113 18:20:11.049772 1 event.go:294] "Event occurred" object="node-svc" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service default/node-svc: endpoints \"node-svc\" already exists" I0113 18:23:47.832517 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set grid-node-chrome-7477bdb5c to 1 from 2" I0113 18:23:47.859041 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7477bdb5c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: grid-node-chrome-7477bdb5c-k25gx" I0113 19:46:54.754068 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-7c7bbf69b5 to 1" I0113 19:46:54.790003 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7c7bbf69b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7c7bbf69b5-n4w5n" I0113 19:47:36.706564 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-5857d5855c to 1" I0113 19:47:36.721217 1 event.go:294] "Event occurred" object="default/grid-node-chrome-5857d5855c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-5857d5855c-69nxc" I0113 23:13:45.423169 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-6f956fd9c9 to 1" I0113 23:13:45.465123 1 event.go:294] "Event occurred" object="default/grid-node-chrome-6f956fd9c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-6f956fd9c9-m5n5j" I0113 23:17:26.100644 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set grid-node-chrome-6f956fd9c9 to 0 from 1" I0113 23:17:26.113311 1 event.go:294] "Event occurred" object="default/grid-node-chrome-6f956fd9c9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: grid-node-chrome-6f956fd9c9-m5n5j" I0113 23:17:26.179815 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-56c7d9474b to 1 from 0" I0113 23:17:26.216239 1 event.go:294] "Event occurred" object="default/grid-node-chrome-56c7d9474b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-56c7d9474b-779kl" W0113 23:23:04.879309 1 endpointslice_controller.go:306] Error syncing endpoint slices for service "default/node-svc", retrying. 
Error: EndpointSlice informer cache is out of date I0113 23:23:12.273413 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-56c7d9474b to 1" I0113 23:23:12.297623 1 event.go:294] "Event occurred" object="default/grid-node-chrome-56c7d9474b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-56c7d9474b-spmnt" I0113 23:41:30.524144 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-bc76bbf6f to 1" I0113 23:41:30.545670 1 event.go:294] "Event occurred" object="default/grid-node-chrome-bc76bbf6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-bc76bbf6f-z2pfh" I0113 23:56:37.951283 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-7df9bdb896 to 1" I0113 23:56:37.975864 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7df9bdb896" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7df9bdb896-cp82m" I0113 23:58:08.558614 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-7df9bdb896 to 1" I0113 23:58:08.575353 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7df9bdb896" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7df9bdb896-rj9mz" I0113 23:59:22.558921 1 event.go:294] "Event occurred" object="default/grid-node-chrome" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set grid-node-chrome-7df9bdb896 to 1" I0113 23:59:22.583825 1 event.go:294] "Event occurred" object="default/grid-node-chrome-7df9bdb896" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: grid-node-chrome-7df9bdb896-4cc75" I0114 00:02:09.481903 1 event.go:294] "Event occurred" object="kube-system/registry" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-2tlp8" I0114 00:02:09.652859 1 event.go:294] "Event occurred" object="kube-system/registry-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-hz5jg" * * ==> kube-proxy [e9a11cba64b7] <== * I0113 01:10:51.575291 1 node.go:163] Successfully retrieved node IP: 192.168.64.2 I0113 01:10:51.575606 1 server_others.go:138] "Detected node IP" address="192.168.64.2" I0113 01:10:51.575734 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode="" I0113 01:10:51.600386 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6 I0113 01:10:51.600458 1 server_others.go:206] "Using iptables Proxier" I0113 01:10:51.600511 1 proxier.go:262] "Setting 
route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259" I0113 01:10:51.600773 1 server.go:661] "Version info" version="v1.25.3" I0113 01:10:51.601045 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0113 01:10:51.601480 1 config.go:317] "Starting service config controller" I0113 01:10:51.601531 1 shared_informer.go:255] Waiting for caches to sync for service config I0113 01:10:51.601564 1 config.go:226] "Starting endpoint slice config controller" I0113 01:10:51.601578 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config I0113 01:10:51.602059 1 config.go:444] "Starting node config controller" I0113 01:10:51.602440 1 shared_informer.go:255] Waiting for caches to sync for node config I0113 01:10:51.702677 1 shared_informer.go:262] Caches are synced for node config I0113 01:10:51.702677 1 shared_informer.go:262] Caches are synced for endpoint slice config I0113 01:10:51.702726 1 shared_informer.go:262] Caches are synced for service config * * ==> kube-scheduler [58573efc523c] <== * I0113 01:10:33.557069 1 serving.go:348] Generated self-signed cert in-memory W0113 01:10:35.227745 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0113 01:10:35.227900 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0113 01:10:35.228006 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0113 01:10:35.228053 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0113 01:10:35.261873 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3" I0113 01:10:35.262047 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0113 01:10:35.263127 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259 I0113 01:10:35.263241 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file" I0113 01:10:35.263561 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0113 01:10:35.263279 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" W0113 01:10:35.271465 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0113 01:10:35.271531 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope W0113 01:10:35.271576 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0113 01:10:35.271634 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope W0113 01:10:35.271675 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0113 01:10:35.271686 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope W0113 01:10:35.273276 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope E0113 01:10:35.273340 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope W0113 01:10:35.273427 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0113 01:10:35.273979 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at 
the cluster scope W0113 01:10:35.274269 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0113 01:10:35.274303 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" W0113 01:10:35.276543 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0113 01:10:35.276939 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0113 01:10:35.276580 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0113 01:10:35.277280 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope W0113 01:10:35.276623 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0113 01:10:35.277569 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope W0113 01:10:35.277769 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0113 01:10:35.277806 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope W0113 01:10:35.277783 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0113 01:10:35.277821 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster 
scope W0113 01:10:35.278068 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0113 01:10:35.279483 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0113 01:10:35.279994 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0113 01:10:35.280984 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope W0113 01:10:35.280029 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0113 01:10:35.281229 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope W0113 01:10:35.280103 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope E0113 01:10:35.281429 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope W0113 01:10:36.168752 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope E0113 01:10:36.168965 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope W0113 01:10:36.205543 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0113 01:10:36.205756 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope W0113 01:10:36.253780 1 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0113 01:10:36.254004 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope W0113 01:10:36.267812 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0113 01:10:36.268056 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope I0113 01:10:36.763744 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Journal begins at Fri 2023-01-13 01:10:05 UTC, ends at Sat 2023-01-14 00:20:14 UTC. -- Jan 14 00:13:52 minikube kubelet[2004]: E0114 00:13:52.401526 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:13:52 minikube kubelet[2004]: E0114 00:13:52.402207 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:14:05 minikube kubelet[2004]: E0114 00:14:05.400402 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed 
in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:14:05 minikube kubelet[2004]: E0114 00:14:05.400495 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:14:18 minikube kubelet[2004]: E0114 00:14:18.402647 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:14:18 minikube kubelet[2004]: E0114 00:14:18.404063 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:14:29 minikube kubelet[2004]: E0114 00:14:29.407966 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:14:29 minikube kubelet[2004]: E0114 00:14:29.408545 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:14:44 minikube kubelet[2004]: E0114 00:14:44.400845 2004 kuberuntime_manager.go:862] container 
&Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:14:44 minikube kubelet[2004]: E0114 00:14:44.401660 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:14:55 minikube kubelet[2004]: E0114 00:14:55.403382 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:14:55 minikube kubelet[2004]: E0114 00:14:55.403476 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:15:09 minikube kubelet[2004]: E0114 00:15:09.404610 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:15:09 minikube 
kubelet[2004]: E0114 00:15:09.404844 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:15:21 minikube kubelet[2004]: E0114 00:15:21.401224 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:15:21 minikube kubelet[2004]: E0114 00:15:21.401344 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:15:35 minikube kubelet[2004]: E0114 00:15:35.402907 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:15:35 minikube kubelet[2004]: E0114 00:15:35.404120 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:15:50 minikube kubelet[2004]: E0114 00:15:50.401038 2004 kuberuntime_manager.go:862] container 
&Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:15:50 minikube kubelet[2004]: E0114 00:15:50.401559 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:16:03 minikube kubelet[2004]: E0114 00:16:03.401424 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:16:03 minikube kubelet[2004]: E0114 00:16:03.402112 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:16:16 minikube kubelet[2004]: E0114 00:16:16.402363 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:16:16 minikube 
kubelet[2004]: E0114 00:16:16.402993 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:16:29 minikube kubelet[2004]: E0114 00:16:29.405004 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:16:29 minikube kubelet[2004]: E0114 00:16:29.405294 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:16:43 minikube kubelet[2004]: E0114 00:16:43.402133 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:16:43 minikube kubelet[2004]: E0114 00:16:43.402777 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:16:56 minikube kubelet[2004]: E0114 00:16:56.401501 2004 kuberuntime_manager.go:862] container 
&Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:16:56 minikube kubelet[2004]: E0114 00:16:56.401602 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:17:07 minikube kubelet[2004]: E0114 00:17:07.401647 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:17:07 minikube kubelet[2004]: E0114 00:17:07.402657 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:17:20 minikube kubelet[2004]: E0114 00:17:20.400561 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:17:20 minikube 
kubelet[2004]: E0114 00:17:20.402975 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:17:33 minikube kubelet[2004]: E0114 00:17:33.403685 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:17:33 minikube kubelet[2004]: E0114 00:17:33.403834 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:17:46 minikube kubelet[2004]: E0114 00:17:46.402450 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:17:46 minikube kubelet[2004]: E0114 00:17:46.402560 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:18:00 minikube kubelet[2004]: E0114 00:18:00.400695 2004 kuberuntime_manager.go:862] container 
&Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:18:00 minikube kubelet[2004]: E0114 00:18:00.401301 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:18:14 minikube kubelet[2004]: E0114 00:18:14.410842 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:18:14 minikube kubelet[2004]: E0114 00:18:14.411470 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:18:25 minikube kubelet[2004]: E0114 00:18:25.402441 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:18:25 minikube 
kubelet[2004]: E0114 00:18:25.402558 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:18:36 minikube kubelet[2004]: E0114 00:18:36.400980 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:18:36 minikube kubelet[2004]: E0114 00:18:36.401098 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:18:48 minikube kubelet[2004]: E0114 00:18:48.400730 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:18:48 minikube kubelet[2004]: E0114 00:18:48.401149 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:19:01 minikube kubelet[2004]: E0114 00:19:01.403942 2004 kuberuntime_manager.go:862] container 
&Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:19:01 minikube kubelet[2004]: E0114 00:19:01.404764 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:19:14 minikube kubelet[2004]: E0114 00:19:14.402357 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:19:14 minikube kubelet[2004]: E0114 00:19:14.404514 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:19:26 minikube kubelet[2004]: E0114 00:19:26.402588 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:19:26 minikube 
kubelet[2004]: E0114 00:19:26.403590 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:19:40 minikube kubelet[2004]: E0114 00:19:40.400947 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:19:40 minikube kubelet[2004]: E0114 00:19:40.401043 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:19:53 minikube kubelet[2004]: E0114 00:19:53.401367 2004 kuberuntime_manager.go:862] container &Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:19:53 minikube kubelet[2004]: E0114 00:19:53.401481 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 Jan 14 00:20:08 minikube kubelet[2004]: E0114 00:20:08.402639 2004 kuberuntime_manager.go:862] container 
&Container{Name:node-proxy,Image:node-proxy:0.1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnh9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Never,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod grid-node-chrome-7df9bdb896-4cc75_default(e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881): ErrImageNeverPull: Container image "node-proxy:0.1" is not present with pull policy of Never Jan 14 00:20:08 minikube kubelet[2004]: E0114 00:20:08.404000 2004 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-proxy\" with ErrImageNeverPull: \"Container image \\\"node-proxy:0.1\\\" is not present with pull policy of Never\"" pod="default/grid-node-chrome-7df9bdb896-4cc75" podUID=e08b7eb5-3a27-4cf4-a5e4-dcc5f1d3f881 * * ==> storage-provisioner [45769dcfc531] <== * I0113 01:11:21.963028 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0113 01:11:21.971274 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0113 01:11:21.971298 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0113 01:11:21.977608 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0113 01:11:21.978196 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_7be74422-4be6-4c9b-b3ba-fd980af39ed2! I0113 01:11:21.982129 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db155abd-57ba-4d38-89df-89b3272e8f72", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_7be74422-4be6-4c9b-b3ba-fd980af39ed2 became leader I0113 01:11:22.080083 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_7be74422-4be6-4c9b-b3ba-fd980af39ed2! * * ==> storage-provisioner [bf20f17cbb06] <== * I0113 01:10:51.582046 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0113 01:11:21.586421 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
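The kubelet entries above all report the same failure: the node-proxy container in pod grid-node-chrome-7df9bdb896-4cc75 uses Image:node-proxy:0.1 with ImagePullPolicy:Never, so the kubelet never attempts a pull and the pod can only start once that image already exists in the cluster's container runtime. Because this cluster was started with --driver=hyperkit, an image built against the host's Docker daemon is not automatically visible inside the VM. A minimal sketch of making the image available, assuming it was built on the host as node-proxy:0.1 (the deployment name grid-node-chrome is inferred from the pod name and may differ):

  # Option 1: copy the host-built image into the minikube container runtime
  minikube image load node-proxy:0.1

  # Option 2: build the image directly against the Docker daemon inside the minikube VM
  eval $(minikube -p minikube docker-env)
  docker build -t node-proxy:0.1 .

  # Verify the image is present, then retrigger the failing pods
  minikube image ls | grep node-proxy
  kubectl rollout restart deployment/grid-node-chrome   # deployment name assumed from the pod name above

Either route leaves ImagePullPolicy:Never intact; that policy is fine for locally built images as long as the image is loaded into the cluster before the pod is (re)scheduled.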
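The two storage-provisioner blocks are two runs of the same kube-system addon container: the run bf20f17cbb06 (started 01:10:51) exited with an i/o timeout dialing the in-cluster apiserver Service at 10.96.0.1:443, which is common while the control plane and service network are still coming up, and the replacement run 45769dcfc531 then initialized, acquired the k8s.io-minikube-hostpath lease, and started the provisioner controller, so that error needs no action on its own. If volume provisioning problems do appear later, the addon's current state can be checked with something like the following (assuming the default pod name storage-provisioner used by the minikube addon):

  kubectl -n kube-system get pod storage-provisioner
  kubectl -n kube-system logs storage-provisioner --previous   # logs of the earlier, crashed run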