*
* ==> Audit <==
* |---------|------|----------|-----------------------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile  | User                  | Version | Start Time                    | End Time                      |
|---------|------|----------|-----------------------|---------|-------------------------------|-------------------------------|
| start   |      | minikube | DESKTOP-QQP8ITG\fouqu | v1.25.2 | Sat, 26 Mar 2022 17:56:11 CET | Sat, 26 Mar 2022 17:57:02 CET |
|---------|------|----------|-----------------------|---------|-------------------------------|-------------------------------|
*
* ==> Dernier démarrage <==
* Log file created at: 2022/03/26 17:56:11
Running on machine: DESKTOP-QQP8ITG
Binary: Built with gc go1.17.7 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0326 17:56:11.682988 35008 out.go:297] Setting OutFile to fd 88 ...
I0326 17:56:11.705988 35008 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0326 17:56:11.705988 35008 out.go:310] Setting ErrFile to fd 92...
I0326 17:56:11.705988 35008 out.go:344] TERM=,COLORTERM=, which probably does not support color
W0326 17:56:11.717488 35008 root.go:293] Error reading config file at C:\Users\fouqu\.minikube\config\config.json: open C:\Users\fouqu\.minikube\config\config.json: The system cannot find the path specified.
I0326 17:56:11.719489 35008 out.go:304] Setting JSON to false
I0326 17:56:11.725489 35008 start.go:112] hostinfo: {"hostname":"DESKTOP-QQP8ITG","uptime":4530,"bootTime":1648309241,"procs":363,"os":"windows","platform":"Microsoft Windows 10 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"17f82113-7bac-49d6-8d21-3e79a61ddce0"}
W0326 17:56:11.725489 35008 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0326 17:56:11.726489 35008 out.go:176] * minikube v1.25.2 sur Microsoft Windows 10 Pro 10.0.19044 Build 19044
I0326 17:56:11.726489 35008 notify.go:193] Checking for updates...
W0326 17:56:11.726489 35008 preload.go:295] Failed to list preload files: open C:\Users\fouqu\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
I0326 17:56:11.726989 35008 driver.go:344] Setting default libvirt URI to qemu:///system I0326 17:56:11.726989 35008 global.go:111] Querying for installed drivers using PATH=C:\Program Files\Microsoft\jdk-11.0.12.7-hotspot\bin;C:\Python39\Scripts\;C:\Python39\;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\Program Files (x86)\Razer\ChromaBroadcast\bin;C:\Program Files\Razer\ChromaBroadcast\bin;C:\Program Files (x86)\Razer Chroma SDK\bin;C:\Program Files\Razer Chroma SDK\bin;C:\ProgramData\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\ProgramData\chocolatey\bin;;C:\Program Files\Azure Data Studio\bin;C:\WINDOWS\system32\config\systemprofile\AppData\Local\Microsoft\WindowsApps;C:\Users\fouqu\AppData\Local\Microsoft\WindowsApps;C:\Users\fouqu\.dotnet\tools;C:\Program Files\MongoDB\Tools\100\bin;C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\Git\mingw32\bin;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Microsoft SQL Server\150\DTS\Binn\;C:\Program Files\PuTTY\;C:\Program Files\Microsoft\Azure Functions Core Tools\;C:\Program Files\Microsoft SQL Server\150\Tools\Binn\;C:\Program Files\Docker\Docker\resources\bin;C:\ProgramData\DockerDesktop\version-bin;C:\Program Files (x86)\Notepad++;C:\Users\fouqu\AppData\Local\Programs\Microsoft VS Code\bin;C:\Users\fouqu\Downloads\Redis-x64-3.0.504;C:\Program Files\OpenSSL-Win64\bin I0326 17:56:11.762989 35008 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:} I0326 17:56:11.780489 35008 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/ Version:} I0326 17:56:12.190024 35008 docker.go:132] docker version: linux-20.10.13 I0326 17:56:12.203989 35008 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0326 17:56:12.825006 35008 info.go:263] docker info: {ID:6OTP:DAKG:3MWV:J3V5:L5SZ:GQWX:XN5Z:J2JH:FADO:X272:RLFG:VWCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-03-26 16:56:11.6224209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.4.72-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux 
Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26832408576 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}} I0326 17:56:12.825006 35008 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0326 17:56:13.383035 35008 global.go:119] hyperv default: true priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0326 17:56:13.399535 35008 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:} I0326 17:56:13.399535 35008 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0326 17:56:13.399535 35008 driver.go:279] not recommending "ssh" due to default: false I0326 17:56:13.399535 35008 driver.go:314] Picked: docker I0326 17:56:13.399535 35008 driver.go:315] Alternatives: [hyperv ssh] I0326 17:56:13.399535 35008 driver.go:316] Rejects: [virtualbox vmware podman] I0326 17:56:13.401535 35008 out.go:176] * Choix automatique du pilote docker. 
Autres choix: hyperv, ssh I0326 17:56:13.402035 35008 start.go:281] selected driver: docker I0326 17:56:13.402035 35008 start.go:798] validating driver "docker" against I0326 17:56:13.402035 35008 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0326 17:56:13.433035 35008 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0326 17:56:14.003052 35008 info.go:263] docker info: {ID:6OTP:DAKG:3MWV:J3V5:L5SZ:GQWX:XN5Z:J2JH:FADO:X272:RLFG:VWCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-03-26 16:56:12.8190674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.4.72-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26832408576 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. 
Version:v0.17.0]] Warnings:}} I0326 17:56:14.003052 35008 start_flags.go:288] no existing cluster config was found, will generate one from the flags I0326 17:56:14.021535 35008 start_flags.go:369] Using suggested 8100MB memory alloc based on sys=32678MB, container=25589MB I0326 17:56:14.022035 35008 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m I0326 17:56:14.022035 35008 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true] I0326 17:56:14.022035 35008 cni.go:93] Creating CNI manager for "" I0326 17:56:14.022035 35008 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0326 17:56:14.022035 35008 start_flags.go:302] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\fouqu:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0326 17:56:14.023535 35008 out.go:176] * Démarrage du noeud de plan de contrôle minikube dans le cluster minikube I0326 17:56:14.023535 35008 cache.go:120] Beginning downloading kic base image for docker with docker I0326 17:56:14.024037 35008 out.go:176] * Extraction de l'image de base... I0326 17:56:14.024037 35008 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker I0326 17:56:14.024037 35008 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon I0326 17:56:14.148596 35008 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 I0326 17:56:14.148596 35008 cache.go:57] Caching tarball of preloaded images I0326 17:56:14.148596 35008 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker I0326 17:56:14.149035 35008 out.go:176] * Téléchargement du préchargement de Kubernetes v1.23.3... 
I0326 17:56:14.149536 35008 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ... I0326 17:56:14.309036 35008 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.3/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4?checksum=md5:1c52b21a02ef67e2e4434a0c47aabce7 -> C:\Users\fouqu\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 I0326 17:56:14.499035 35008 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull I0326 17:56:14.499035 35008 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load I0326 17:56:26.268663 35008 preload.go:249] saving checksum for preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ... I0326 17:56:26.270662 35008 preload.go:256] verifying checksumm of C:\Users\fouqu\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 ... I0326 17:56:27.001161 35008 cache.go:60] Finished verifying existence of preloaded tar for v1.23.3 on docker I0326 17:56:27.001662 35008 profile.go:148] Saving config to C:\Users\fouqu\.minikube\profiles\minikube\config.json ... I0326 17:56:27.001662 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\config.json: {Name:mkb6923a64242880f31db5ecc3c158a8c5c0c79a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:27.004162 35008 cache.go:208] Successfully downloaded all kic artifacts I0326 17:56:27.004162 35008 start.go:348] acquiring machines lock for minikube: {Name:mk23903f141e1cc17540db5b97bd4e67a89443aa Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0326 17:56:27.004162 35008 start.go:352] acquired machines lock for "minikube" in 0s I0326 17:56:27.004662 35008 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s 
ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\fouqu:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0326 17:56:27.004662 35008 start.go:127] createHost starting for "" (driver="docker") I0326 17:56:27.006162 35008 out.go:203] * Création de docker container (CPUs=2, Memory=8100Mo) ... I0326 17:56:27.006662 35008 start.go:161] libmachine.API.Create for "minikube" (driver="docker") I0326 17:56:27.006662 35008 client.go:168] LocalClient.Create starting I0326 17:56:27.007162 35008 main.go:130] libmachine: Creating CA: C:\Users\fouqu\.minikube\certs\ca.pem I0326 17:56:27.066663 35008 main.go:130] libmachine: Creating client certificate: C:\Users\fouqu\.minikube\certs\cert.pem I0326 17:56:27.165161 35008 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0326 17:56:27.570180 35008 cli_runner.go:180] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0326 17:56:27.584662 35008 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs... 
I0326 17:56:27.584662 35008 cli_runner.go:133] Run: docker network inspect minikube W0326 17:56:27.973187 35008 cli_runner.go:180] docker network inspect minikube returned with exit code 1 I0326 17:56:27.973187 35008 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I0326 17:56:27.973187 35008 network_create.go:259] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I0326 17:56:27.987164 35008 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0326 17:56:28.374163 35008 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006722d8] misses:0} I0326 17:56:28.374163 35008 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0326 17:56:28.374163 35008 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... I0326 17:56:28.388662 35008 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube I0326 17:56:28.918703 35008 network_create.go:90] docker network minikube 192.168.49.0/24 created I0326 17:56:28.918703 35008 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container I0326 17:56:28.946662 35008 cli_runner.go:133] Run: docker ps -a --format {{.Names}} I0326 17:56:29.475162 35008 cli_runner.go:133] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0326 17:56:29.895162 35008 oci.go:102] Successfully created a docker volume minikube I0326 17:56:29.909163 35008 cli_runner.go:133] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib I0326 17:56:32.338254 35008 cli_runner.go:186] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib: (2.4290909s) I0326 17:56:32.338254 35008 oci.go:106] Successfully prepared a docker volume minikube I0326 17:56:32.338254 35008 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker I0326 17:56:32.338254 35008 kic.go:179] Starting extracting preloaded images to volume ... 
I0326 17:56:32.354256 35008 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\fouqu\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir I0326 17:56:37.834559 35008 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\fouqu\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (5.4803028s) I0326 17:56:37.834559 35008 kic.go:188] duration metric: took 5.496305 seconds to extract preloaded images to volume I0326 17:56:37.848556 35008 cli_runner.go:133] Run: docker system info --format "{{json .}}" I0326 17:56:38.454591 35008 info.go:263] docker info: {ID:6OTP:DAKG:3MWV:J3V5:L5SZ:GQWX:XN5Z:J2JH:FADO:X272:RLFG:VWCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:46 SystemTime:2022-03-26 16:56:37.2728664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.4.72-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:26832408576 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. 
Version:v0.8.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.3.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0326 17:56:38.468556 35008 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0326 17:56:39.073056 35008 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8100mb --memory-swap=8100mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
I0326 17:56:39.973056 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Running}}
I0326 17:56:40.390556 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0326 17:56:40.793556 35008 cli_runner.go:133] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0326 17:56:41.244577 35008 oci.go:281] the created container "minikube" has a running status.
I0326 17:56:41.244577 35008 kic.go:210] Creating ssh key for kic: C:\Users\fouqu\.minikube\machines\minikube\id_rsa...
I0326 17:56:41.298056 35008 kic_runner.go:191] docker (temp): C:\Users\fouqu\.minikube\machines\minikube\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0326 17:56:41.770556 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0326 17:56:42.191058 35008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0326 17:56:42.191058 35008 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0326 17:56:42.631057 35008 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\fouqu\.minikube\machines\minikube\id_rsa...
E0326 17:56:42.933898 35008 kic.go:267] icacls failed applying permissions - err - [%!!(MISSING)s()], output - [fichier traité : C:\Users\fouqu\.minikube\machines\minikube\id_rsa 1 fichiers correctement traités ; échec du traitement de 0 fichiers]
I0326 17:56:42.963367 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0326 17:56:43.367866 35008 machine.go:88] provisioning docker machine ...
I0326 17:56:43.367866 35008 ubuntu.go:169] provisioning hostname "minikube" I0326 17:56:43.382369 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:43.730867 35008 main.go:130] libmachine: Using SSH client type: native I0326 17:56:43.732368 35008 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xe3bfa0] 0xe3ee60 [] 0s} 127.0.0.1 60565 } I0326 17:56:43.732368 35008 main.go:130] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0326 17:56:43.800878 35008 main.go:130] libmachine: SSH cmd err, output: : minikube I0326 17:56:43.814879 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:44.211877 35008 main.go:130] libmachine: Using SSH client type: native I0326 17:56:44.211877 35008 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xe3bfa0] 0xe3ee60 [] 0s} 127.0.0.1 60565 } I0326 17:56:44.211877 35008 main.go:130] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0326 17:56:44.275258 35008 main.go:130] libmachine: SSH cmd err, output: : I0326 17:56:44.275258 35008 ubuntu.go:175] set auth options {CertDir:C:\Users\fouqu\.minikube CaCertPath:C:\Users\fouqu\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\fouqu\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\fouqu\.minikube\machines\server.pem ServerKeyPath:C:\Users\fouqu\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\fouqu\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\fouqu\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\fouqu\.minikube} I0326 17:56:44.275258 35008 ubuntu.go:177] setting up certificates I0326 17:56:44.275258 35008 provision.go:83] configureAuth start I0326 17:56:44.290258 35008 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0326 17:56:44.669757 35008 provision.go:138] copyHostCerts I0326 17:56:44.669757 35008 exec_runner.go:151] cp: C:\Users\fouqu\.minikube\certs\ca.pem --> C:\Users\fouqu\.minikube/ca.pem (1074 bytes) I0326 17:56:44.672258 35008 exec_runner.go:151] cp: C:\Users\fouqu\.minikube\certs\cert.pem --> C:\Users\fouqu\.minikube/cert.pem (1119 bytes) I0326 17:56:44.673757 35008 exec_runner.go:151] cp: C:\Users\fouqu\.minikube\certs\key.pem --> C:\Users\fouqu\.minikube/key.pem (1675 bytes) I0326 17:56:44.675258 35008 provision.go:112] generating server cert: C:\Users\fouqu\.minikube\machines\server.pem ca-key=C:\Users\fouqu\.minikube\certs\ca.pem private-key=C:\Users\fouqu\.minikube\certs\ca-key.pem org=fouqu.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0326 17:56:44.824257 35008 provision.go:172] copyRemoteCerts I0326 17:56:44.841258 35008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0326 17:56:44.854757 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:45.212258 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 
SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker} I0326 17:56:45.247757 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1074 bytes) I0326 17:56:45.256757 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes) I0326 17:56:45.265375 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0326 17:56:45.274257 35008 provision.go:86] duration metric: configureAuth took 998.9993ms I0326 17:56:45.274257 35008 ubuntu.go:193] setting minikube options for container-runtime I0326 17:56:45.274758 35008 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3 I0326 17:56:45.288758 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:45.652757 35008 main.go:130] libmachine: Using SSH client type: native I0326 17:56:45.653258 35008 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xe3bfa0] 0xe3ee60 [] 0s} 127.0.0.1 60565 } I0326 17:56:45.653258 35008 main.go:130] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0326 17:56:45.716289 35008 main.go:130] libmachine: SSH cmd err, output: : overlay I0326 17:56:45.716289 35008 ubuntu.go:71] root file system type: overlay I0326 17:56:45.716289 35008 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0326 17:56:45.731288 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:46.133290 35008 main.go:130] libmachine: Using SSH client type: native I0326 17:56:46.133791 35008 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xe3bfa0] 0xe3ee60 [] 0s} 127.0.0.1 60565 } I0326 17:56:46.133791 35008 main.go:130] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0326 17:56:46.201593 35008 main.go:130] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0326 17:56:46.215592 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:46.706592 35008 main.go:130] libmachine: Using SSH client type: native I0326 17:56:46.707093 35008 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0xe3bfa0] 0xe3ee60 [] 0s} 127.0.0.1 60565 } I0326 17:56:46.707093 35008 main.go:130] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0326 17:56:47.034930 35008 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2021-12-13 11:43:42.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2022-03-26 16:56:45.260000000 +0000 @@ -1,30 +1,32 @@ [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com +BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target -Requires=docker.socket containerd.service +Requires=docker.socket +StartLimitBurst=3 +StartLimitIntervalSec=60 [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 +Restart=on-failure -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -32,16 +34,16 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process -OOMScoreAdjust=-500 [Install] WantedBy=multi-user.target Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I0326 17:56:47.034930 35008 machine.go:91] provisioned docker machine in 3.6670639s I0326 17:56:47.034930 35008 client.go:171] LocalClient.Create took 20.0282681s I0326 17:56:47.035430 35008 start.go:169] duration metric: libmachine.API.Create for "minikube" took 20.0287677s I0326 17:56:47.035430 35008 start.go:302] post-start starting for "minikube" (driver="docker") I0326 17:56:47.035430 35008 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0326 17:56:47.051930 35008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0326 17:56:47.065430 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:47.458930 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker} I0326 17:56:47.551440 35008 ssh_runner.go:195] Run: cat /etc/os-release I0326 17:56:47.553439 35008 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0326 17:56:47.553439 35008 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0326 17:56:47.553439 35008 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0326 17:56:47.553439 35008 info.go:137] Remote host: Ubuntu 20.04.2 LTS I0326 17:56:47.553439 35008 filesync.go:126] Scanning C:\Users\fouqu\.minikube\addons for local assets ... I0326 17:56:47.553940 35008 filesync.go:126] Scanning C:\Users\fouqu\.minikube\files for local assets ... 
I0326 17:56:47.553940 35008 start.go:305] post-start completed in 518.5105ms I0326 17:56:47.570441 35008 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0326 17:56:47.954939 35008 profile.go:148] Saving config to C:\Users\fouqu\.minikube\profiles\minikube\config.json ... I0326 17:56:47.976939 35008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0326 17:56:47.992939 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:48.419939 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker} I0326 17:56:48.470439 35008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0326 17:56:48.472940 35008 start.go:130] duration metric: createHost completed in 21.4682775s I0326 17:56:48.472940 35008 start.go:81] releasing machines lock for "minikube", held for 21.4687782s I0326 17:56:48.486941 35008 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I0326 17:56:48.905439 35008 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0326 17:56:48.917942 35008 ssh_runner.go:195] Run: systemctl --version I0326 17:56:48.921940 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:48.934940 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0326 17:56:49.318939 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker} I0326 17:56:49.349959 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker} I0326 17:56:49.517509 35008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0326 17:56:49.539508 35008 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0326 17:56:49.545509 35008 cruntime.go:272] skipping containerd shutdown because we are bound to it I0326 17:56:49.561508 35008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0326 17:56:49.567076 35008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0326 17:56:49.589576 35008 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0326 17:56:49.635076 35008 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0326 17:56:49.681077 35008 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0326 17:56:49.703077 35008 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0326 17:56:49.749578 35008 ssh_runner.go:195] Run: sudo systemctl start docker I0326 17:56:49.769578 35008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0326 17:56:49.806075 35008 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0326 17:56:49.827076 35008 out.go:203] * Préparation de Kubernetes v1.23.3 sur Docker 20.10.12... 
I0326 17:56:49.842075 35008 cli_runner.go:133] Run: docker exec -t minikube dig +short host.docker.internal
I0326 17:56:51.146991 35008 cli_runner.go:186] Completed: docker exec -t minikube dig +short host.docker.internal: (1.3049155s)
I0326 17:56:51.146991 35008 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0326 17:56:51.163491 35008 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0326 17:56:51.165992 35008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0326 17:56:51.185493 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0326 17:56:51.627491 35008 out.go:176] - kubelet.housekeeping-interval=5m
I0326 17:56:51.627992 35008 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0326 17:56:51.641993 35008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0326 17:56:51.660492 35008 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0326 17:56:51.660492 35008 docker.go:537] Images already preloaded, skipping extraction
I0326 17:56:51.674993 35008 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0326 17:56:51.691492 35008 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0326 17:56:51.691492 35008 cache_images.go:84] Images are preloaded, skipping loading
I0326 17:56:51.705993 35008 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0326 17:56:51.748491 35008 cni.go:93] Creating CNI manager for ""
I0326 17:56:51.748491 35008 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0326 17:56:51.748491 35008 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0326 17:56:51.748491 35008 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0326 17:56:51.748491 35008 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0326 17:56:51.748491 35008 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config: {KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0326 17:56:51.765492 35008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
I0326 17:56:51.769991 35008 binaries.go:44] Found k8s binaries, skipping transfer
I0326 17:56:51.786491 35008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0326 17:56:51.790492 35008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I0326 17:56:51.796991 35008 ssh_runner.go:362] scp memory -->
/lib/systemd/system/kubelet.service (352 bytes) I0326 17:56:51.803493 35008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes) I0326 17:56:51.826992 35008 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I0326 17:56:51.828994 35008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0326 17:56:51.833992 35008 certs.go:54] Setting up C:\Users\fouqu\.minikube\profiles\minikube for IP: 192.168.49.2 I0326 17:56:51.833992 35008 certs.go:187] generating minikubeCA CA: C:\Users\fouqu\.minikube\ca.key I0326 17:56:51.894991 35008 crypto.go:156] Writing cert to C:\Users\fouqu\.minikube\ca.crt ... I0326 17:56:51.894991 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\ca.crt: {Name:mk788bd9f85baf3cd422c7f4b20e1481b61f08bd Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:51.896992 35008 crypto.go:164] Writing key to C:\Users\fouqu\.minikube\ca.key ... I0326 17:56:51.896992 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\ca.key: {Name:mkfda769ce0e2323eadca926c9018ca25ded324a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:51.898491 35008 certs.go:187] generating proxyClientCA CA: C:\Users\fouqu\.minikube\proxy-client-ca.key I0326 17:56:51.947494 35008 crypto.go:156] Writing cert to C:\Users\fouqu\.minikube\proxy-client-ca.crt ... I0326 17:56:51.947494 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\proxy-client-ca.crt: {Name:mk998c1aca990d4fdea4c303a1e139bee7e20124 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:51.949491 35008 crypto.go:164] Writing key to C:\Users\fouqu\.minikube\proxy-client-ca.key ... I0326 17:56:51.949491 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\proxy-client-ca.key: {Name:mk5dc404a075fe3a9068da40d46856b8c09c2823 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:51.951492 35008 certs.go:302] generating minikube-user signed cert: C:\Users\fouqu\.minikube\profiles\minikube\client.key I0326 17:56:51.951492 35008 crypto.go:68] Generating cert C:\Users\fouqu\.minikube\profiles\minikube\client.crt with IP's: [] I0326 17:56:52.119991 35008 crypto.go:156] Writing cert to C:\Users\fouqu\.minikube\profiles\minikube\client.crt ... I0326 17:56:52.119991 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\client.crt: {Name:mk24e23c9b61c847f8b664ebbbf83bf4e03998d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:52.121991 35008 crypto.go:164] Writing key to C:\Users\fouqu\.minikube\profiles\minikube\client.key ... I0326 17:56:52.121991 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\client.key: {Name:mk0fbd89506f8ae832ecbded28ace7846e8e8c5e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:52.123493 35008 certs.go:302] generating minikube signed cert: C:\Users\fouqu\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 I0326 17:56:52.123991 35008 crypto.go:68] Generating cert C:\Users\fouqu\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I0326 17:56:52.311490 35008 crypto.go:156] Writing cert to C:\Users\fouqu\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 ... 
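Note: the steps above generate the CA and profile certificates on the Windows host (under C:\Users\fouqu\.minikube) before they are copied into the node at /var/lib/minikube/certs. As a rough cross-check of which SANs ended up in the API server certificate (a sketch, assuming this cluster is still running and using the openssl binary present in the node image, which this log invokes later):

  minikube ssh -- "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text" | grep -A1 "Subject Alternative Name"
  # should list the IPs used for generation above: 192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1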
I0326 17:56:52.311490 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2: {Name:mkaf8d3a6beb23a73c8dd9a06592e572eacd1f2a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:52.313491 35008 crypto.go:164] Writing key to C:\Users\fouqu\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 ... I0326 17:56:52.313491 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\apiserver.key.dd3b5fb2: {Name:mkeca8c640b4110cfce8c5950774a8d041e88923 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:52.314491 35008 certs.go:320] copying C:\Users\fouqu\.minikube\profiles\minikube\apiserver.crt.dd3b5fb2 -> C:\Users\fouqu\.minikube\profiles\minikube\apiserver.crt I0326 17:56:52.315994 35008 certs.go:324] copying C:\Users\fouqu\.minikube\profiles\minikube\apiserver.key.dd3b5fb2 -> C:\Users\fouqu\.minikube\profiles\minikube\apiserver.key I0326 17:56:52.317491 35008 certs.go:302] generating aggregator signed cert: C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.key I0326 17:56:52.317491 35008 crypto.go:68] Generating cert C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.crt with IP's: [] I0326 17:56:52.481992 35008 crypto.go:156] Writing cert to C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.crt ... I0326 17:56:52.481992 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.crt: {Name:mkfbdda22d6861a64d708c113c196e47ad6cf275 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:52.483992 35008 crypto.go:164] Writing key to C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.key ... I0326 17:56:52.483992 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.key: {Name:mk3e65240d636dca6723de49ab48cca1c4c222a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0326 17:56:52.486997 35008 certs.go:388] found cert: C:\Users\fouqu\.minikube\certs\C:\Users\fouqu\.minikube\certs\ca-key.pem (1675 bytes) I0326 17:56:52.486997 35008 certs.go:388] found cert: C:\Users\fouqu\.minikube\certs\C:\Users\fouqu\.minikube\certs\ca.pem (1074 bytes) I0326 17:56:52.486997 35008 certs.go:388] found cert: C:\Users\fouqu\.minikube\certs\C:\Users\fouqu\.minikube\certs\cert.pem (1119 bytes) I0326 17:56:52.486997 35008 certs.go:388] found cert: C:\Users\fouqu\.minikube\certs\C:\Users\fouqu\.minikube\certs\key.pem (1675 bytes) I0326 17:56:52.487991 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0326 17:56:52.498492 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0326 17:56:52.508992 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0326 17:56:52.518491 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0326 17:56:52.527492 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0326 17:56:52.536490 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0326 17:56:52.544991 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0326 17:56:52.553490 35008 ssh_runner.go:362] scp 
C:\Users\fouqu\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0326 17:56:52.561991 35008 ssh_runner.go:362] scp C:\Users\fouqu\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0326 17:56:52.570492 35008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0326 17:56:52.593991 35008 ssh_runner.go:195] Run: openssl version I0326 17:56:52.613990 35008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0326 17:56:52.635490 35008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0326 17:56:52.637990 35008 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 26 2022 /usr/share/ca-certificates/minikubeCA.pem I0326 17:56:52.654491 35008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0326 17:56:52.673991 35008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0326 17:56:52.678491 35008 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:8100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\fouqu:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} I0326 17:56:52.692491 35008 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0326 17:56:52.725991 35008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0326 17:56:52.747491 35008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0326 17:56:52.751491 35008 kubeadm.go:221] ignoring 
SystemVerification for kubeadm because of docker driver
I0326 17:56:52.767992 35008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0326 17:56:52.771991 35008 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0326 17:56:52.771991 35008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0326 17:57:00.366369 35008 out.go:203] - Generating certificates and keys
I0326 17:57:00.367869 35008 out.go:203] - Booting up control plane ...
I0326 17:57:00.369368 35008 out.go:203] - Configuring RBAC rules ...
I0326 17:57:00.370868 35008 cni.go:93] Creating CNI manager for ""
I0326 17:57:00.370868 35008 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0326 17:57:00.370868 35008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0326 17:57:00.377368 35008 ops.go:34] apiserver oom_adj: -16
I0326 17:57:00.389370 35008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0326 17:57:00.389370 35008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=597367675a06de09f6c1768f8c07b7f7c1e6101a minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_03_26T17_57_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0326 17:57:00.491171 35008 kubeadm.go:1020] duration metric: took 120.3037ms to wait for elevateKubeSystemPrivileges.
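Note: the two kubectl invocations above are what grant cluster-admin to the kube-system default service account and stamp the node with minikube's metadata labels. A quick way to confirm both from the host (a sketch, assuming kubectl already points at this cluster, as it does by the end of this log):

  kubectl get clusterrolebinding minikube-rbac -o wide
  # expected: ClusterRole/cluster-admin bound to ServiceAccount kube-system/default
  kubectl get node minikube --show-labels
  # expected: the minikube.k8s.io/version, commit, name, updated_at and primary labels set above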
I0326 17:57:00.491171 35008 kubeadm.go:393] StartCluster complete in 7.8126805s
I0326 17:57:00.491171 35008 settings.go:142] acquiring lock: {Name:mk32e1881ed831f0e967693cb3df71eae268dc1c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0326 17:57:00.491171 35008 settings.go:150] Updating kubeconfig: C:\Users\fouqu\.kube\config
I0326 17:57:00.492671 35008 lock.go:35] WriteFile acquiring C:\Users\fouqu\.kube\config: {Name:mkb2468fca449c5ac45bc923ae91614530285dd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0326 17:57:01.009689 35008 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0326 17:57:01.009689 35008 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0326 17:57:01.009689 35008 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0326 17:57:01.009689 35008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0326 17:57:01.009950 35008 out.go:176] * Verifying Kubernetes components...
I0326 17:57:01.009950 35008 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0326 17:57:01.009950 35008 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0326 17:57:01.009950 35008 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0326 17:57:01.009950 35008 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0326 17:57:01.009950 35008 addons.go:165] addon storage-provisioner should already be in state true
I0326 17:57:01.009950 35008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0326 17:57:01.010451 35008 host.go:66] Checking if "minikube" exists ...
I0326 17:57:01.032952 35008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0326 17:57:01.039950 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0326 17:57:01.040450 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0326 17:57:01.046452 35008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0326 17:57:01.067450 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0326 17:57:01.415453 35008 addons.go:153] Setting addon default-storageclass=true in "minikube"
W0326 17:57:01.415453 35008 addons.go:165] addon default-storageclass should already be in state true
I0326 17:57:01.415953 35008 host.go:66] Checking if "minikube" exists ...
I0326 17:57:01.426951 35008 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0326 17:57:01.426951 35008 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0326 17:57:01.426951 35008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0326 17:57:01.449950 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0326 17:57:01.464951 35008 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0326 17:57:01.520951 35008 api_server.go:51] waiting for apiserver process to appear ...
I0326 17:57:01.540949 35008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0326 17:57:01.628450 35008 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
I0326 17:57:01.628450 35008 api_server.go:71] duration metric: took 618.7606ms to wait for apiserver process to appear ...
I0326 17:57:01.628450 35008 api_server.go:87] waiting for apiserver healthz status ...
I0326 17:57:01.628450 35008 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60569/healthz ...
I0326 17:57:01.632451 35008 api_server.go:266] https://127.0.0.1:60569/healthz returned 200: ok
I0326 17:57:01.633449 35008 api_server.go:140] control plane version: v1.23.3
I0326 17:57:01.633449 35008 api_server.go:130] duration metric: took 4.9998ms to wait for apiserver health ...
I0326 17:57:01.633449 35008 system_pods.go:43] waiting for kube-system pods to appear ...
I0326 17:57:01.637450 35008 system_pods.go:59] 4 kube-system pods found
I0326 17:57:01.637450 35008 system_pods.go:61] "etcd-minikube" [be15abe9-2525-4d01-9494-7afac9d3bed4] Pending
I0326 17:57:01.637450 35008 system_pods.go:61] "kube-apiserver-minikube" [9838e8b4-f00a-43c5-9cbc-d32239a70093] Pending
I0326 17:57:01.637450 35008 system_pods.go:61] "kube-controller-manager-minikube" [92d348eb-7ab0-40ad-864f-3b637675aa22] Pending
I0326 17:57:01.637450 35008 system_pods.go:61] "kube-scheduler-minikube" [4c855aa3-d1cd-4f9e-9f34-d690b55465e9] Pending
I0326 17:57:01.637450 35008 system_pods.go:74] duration metric: took 4.0009ms to wait for pod list to return data ...
I0326 17:57:01.637450 35008 kubeadm.go:548] duration metric: took 627.7613ms to wait for : map[apiserver:true system_pods:true] ...
I0326 17:57:01.637450 35008 node_conditions.go:102] verifying NodePressure condition ...
I0326 17:57:01.638951 35008 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0326 17:57:01.638951 35008 node_conditions.go:123] node cpu capacity is 16
I0326 17:57:01.638951 35008 node_conditions.go:105] duration metric: took 1.5005ms to run NodePressure ...
I0326 17:57:01.638951 35008 start.go:213] waiting for startup goroutines ...
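Note: the healthz probe above goes through the Docker-published host port (60569 in this run; it is assigned dynamically, so it differs between starts). An equivalent check that does not depend on the port mapping (a sketch, assuming kubectl is configured for this cluster):

  kubectl get --raw=/healthz
  # prints "ok", matching the 200 response logged above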
I0326 17:57:01.828950 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker}
I0326 17:57:01.837950 35008 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0326 17:57:01.837950 35008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0326 17:57:01.852450 35008 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0326 17:57:01.936450 35008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0326 17:57:02.232451 35008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60565 SSHKeyPath:C:\Users\fouqu\.minikube\machines\minikube\id_rsa Username:docker}
I0326 17:57:02.287950 35008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0326 17:57:02.345418 35008 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0326 17:57:02.345418 35008 addons.go:417] enableAddons completed in 1.3357286s
I0326 17:57:02.449447 35008 start.go:496] kubectl: 1.23.3, cluster: 1.23.3 (minor skew: 0)
I0326 17:57:02.449918 35008 out.go:176] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Sat 2022-03-26 16:56:39 UTC, end at Sat 2022-03-26 16:58:49 UTC. --
Mar 26 16:56:39 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.146511200Z" level=info msg="Starting up"
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.147426500Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.147444500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.147460600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.147467600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.148238400Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.148257200Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.148265900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.148270400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.151231700Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164425400Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164442500Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 26
16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164446600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164450600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164453400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164456000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.164551700Z" level=info msg="Loading containers: start." Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.190703800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.207324500Z" level=info msg="Loading containers: done." Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.214556900Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.214625300Z" level=info msg="Daemon has completed initialization" Mar 26 16:56:39 minikube systemd[1]: Started Docker Application Container Engine. Mar 26 16:56:39 minikube dockerd[212]: time="2022-03-26T16:56:39.235422700Z" level=info msg="API listen on /run/docker.sock" Mar 26 16:56:45 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Mar 26 16:56:45 minikube systemd[1]: Stopping Docker Application Container Engine... Mar 26 16:56:45 minikube dockerd[212]: time="2022-03-26T16:56:45.996824000Z" level=info msg="Processing signal 'terminated'" Mar 26 16:56:45 minikube dockerd[212]: time="2022-03-26T16:56:45.997454700Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Mar 26 16:56:45 minikube dockerd[212]: time="2022-03-26T16:56:45.997644500Z" level=info msg="Daemon shutdown complete" Mar 26 16:56:45 minikube dockerd[212]: time="2022-03-26T16:56:45.997703100Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby Mar 26 16:56:45 minikube systemd[1]: docker.service: Succeeded. Mar 26 16:56:45 minikube systemd[1]: Stopped Docker Application Container Engine. Mar 26 16:56:45 minikube systemd[1]: Starting Docker Application Container Engine... 
Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.019648600Z" level=info msg="Starting up" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.020714500Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.020731100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.020742600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.020747700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.021439000Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.021454500Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.021462600Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.021468200Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.028270100Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031623500Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031639700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031643800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031647900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031651000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031653600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.031777200Z" level=info msg="Loading containers: start." Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.064528800Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.078403800Z" level=info msg="Loading containers: done." Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.085892200Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12 Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.085930900Z" level=info msg="Daemon has completed initialization" Mar 26 16:56:46 minikube systemd[1]: Started Docker Application Container Engine. 
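Note: the stop/start of dockerd between 16:56:45 and 16:56:46 is minikube rewriting the docker.service unit inside the node and restarting it; the lines that follow show the daemon coming back on [::]:2376 and /var/run/docker.sock. The same journal can be pulled from the host with (a sketch, assuming the node container is still up):

  minikube ssh -- "sudo journalctl -u docker --no-pager | tail -n 30"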
Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.098672800Z" level=info msg="API listen on [::]:2376"
Mar 26 16:56:46 minikube dockerd[473]: time="2022-03-26T16:56:46.100435500Z" level=info msg="API listen on /var/run/docker.sock"
Mar 26 16:57:34 minikube dockerd[473]: time="2022-03-26T16:57:34.386548700Z" level=info msg="ignoring event" container=16be0bf1777b815c3be5eaa8312a8591970d372804bd01a4eb7efb169b55f634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
*
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID
166438608cc3c   6e38f40d628db   About a minute ago   Running   storage-provisioner       1         0280e78e460ee
d6ed1950bf2fc   9b7cc99821098   About a minute ago   Running   kube-proxy                0         77b870a2dc46d
16be0bf1777b8   6e38f40d628db   About a minute ago   Exited    storage-provisioner       0         0280e78e460ee
9aea94b86bfca   a4ca41631cc7a   About a minute ago   Running   coredns                   0         060864938a7fc
3e180e81773ac   25f8c7f3da61c   About a minute ago   Running   etcd                      0         cc51e95adf54f
0ff52845852a4   f40be0088a83e   About a minute ago   Running   kube-apiserver            0         bf993baf2873a
565e7165dfa09   99a3486be4f28   About a minute ago   Running   kube-scheduler            0         bbddc1a2ee85f
abf7fc0faa433   b07520cd7ab76   About a minute ago   Running   kube-controller-manager   0         7b4802706fefa
*
* ==> coredns [9aea94b86bfc] <==
*
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=597367675a06de09f6c1768f8c07b7f7c1e6101a
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_03_26T17_57_00_0700
                    minikube.k8s.io/version=v1.25.2
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 26 Mar 2022 16:56:56 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Sat, 26 Mar 2022 16:58:40 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 26 Mar 2022 16:57:09 +0000   Sat, 26 Mar 2022 16:56:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 26 Mar 2022 16:57:09 +0000   Sat, 26 Mar 2022 16:56:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 26 Mar 2022 16:57:09 +0000   Sat, 26 Mar 2022 16:56:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 26 Mar 2022 16:57:09 +0000   Sat, 26 Mar 2022 16:57:09 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-2Mi:      0
  memory:             26203524Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  263174212Ki
  hugepages-2Mi:      0
  memory:             26203524Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                b6a262faae404a5db719705fd34b5c8b
  Boot ID:                    8d8c5dd8-0334-4ef4-bfaa-80fdfaa43db4
  Kernel Version:             5.4.72-microsoft-standard-WSL2
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (7 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-64897985d-p57sj           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     97s
  kube-system  etcd-minikube                     100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         112s
  kube-system  kube-apiserver-minikube           250m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
  kube-system  kube-controller-manager-minikube  200m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
  kube-system  kube-proxy-24m85                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
  kube-system  kube-scheduler-minikube           100m (0%)     0 (0%)      0 (0%)           0 (0%)         111s
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (4%)   0 (0%)
  memory             170Mi (0%)  170Mi (0%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From        Message
  ----    ------                   ----                 ----        -------
  Normal  Starting                 95s                  kube-proxy
  Normal  NodeHasSufficientMemory  116s (x4 over 116s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    116s (x4 over 116s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     116s (x4 over 116s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  110s                 kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    110s                 kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     110s                 kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  110s                 kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 110s                 kubelet     Starting kubelet.
  Normal  NodeReady                100s                 kubelet     Node minikube status is now: NodeReady
*
* ==> dmesg <==
*
[ +0.015639] init: (1) ERROR: UpdateTimezone:97: Europe/Paris timezone not found. Is the tzdata package installed?
[ +0.000006] init: (1) ERROR: InitEntryUtilityVm:2434: UpdateTimezone failed [ +0.103222] FS-Cache: Duplicate cookie detected [ +0.000005] FS-Cache: O-cookie c=000000000a7c6fe3 [p=0000000097f9b222 fl=222 nc=0 na=1] [ +0.000001] FS-Cache: O-cookie d=00000000b88a8435 n=00000000e9364b18 [ +0.000001] FS-Cache: O-key=[10] '34323935323237393839' [ +0.000002] FS-Cache: N-cookie c=00000000e8726bd6 [p=0000000097f9b222 fl=2 nc=0 na=1] [ +0.000001] FS-Cache: N-cookie d=00000000b88a8435 n=0000000091582a2c [ +0.000000] FS-Cache: N-key=[10] '34323935323237393839' [ +0.000155] init: (1) ERROR: ConfigApplyWindowsLibPath:2129: open /etc/ld.so.conf.d/ld.wsl.conf [ +0.000001] failed 2 [ +0.002233] FS-Cache: Duplicate cookie detected [ +0.000002] FS-Cache: O-cookie c=000000000a7c6fe3 [p=0000000097f9b222 fl=222 nc=0 na=1] [ +0.000001] FS-Cache: O-cookie d=00000000b88a8435 n=00000000e9364b18 [ +0.000001] FS-Cache: O-key=[10] '34323935323237393839' [ +0.000001] FS-Cache: N-cookie c=00000000773f5a7c [p=0000000097f9b222 fl=2 nc=0 na=1] [ +0.000001] FS-Cache: N-cookie d=00000000b88a8435 n=00000000624c7fc6 [ +0.000001] FS-Cache: N-key=[10] '34323935323237393839' [ +0.001414] FS-Cache: Duplicate cookie detected [ +0.000002] FS-Cache: O-cookie c=000000000a7c6fe3 [p=0000000097f9b222 fl=222 nc=0 na=1] [ +0.000002] FS-Cache: O-cookie d=00000000b88a8435 n=00000000e9364b18 [ +0.000000] FS-Cache: O-key=[10] '34323935323237393839' [ +0.000002] FS-Cache: N-cookie c=000000001989f048 [p=0000000097f9b222 fl=2 nc=0 na=1] [ +0.000001] FS-Cache: N-cookie d=00000000b88a8435 n=00000000f6b8bdf1 [ +0.000000] FS-Cache: N-key=[10] '34323935323237393839' [ +0.002811] init: (1) ERROR: UpdateTimezone:97: Europe/Paris timezone not found. Is the tzdata package installed? [ +0.000005] init: (1) ERROR: InitEntryUtilityVm:2434: UpdateTimezone failed [ +0.094473] FS-Cache: Duplicate cookie detected [ +0.000005] FS-Cache: O-cookie c=000000001989f048 [p=0000000097f9b222 fl=222 nc=0 na=1] [ +0.000001] FS-Cache: O-cookie d=00000000b88a8435 n=00000000813de212 [ +0.000001] FS-Cache: O-key=[10] '34323935323237393939' [ +0.000002] FS-Cache: N-cookie c=0000000028bf8ce7 [p=0000000097f9b222 fl=2 nc=0 na=1] [ +0.000001] FS-Cache: N-cookie d=00000000b88a8435 n=000000001667f8c6 [ +0.000000] FS-Cache: N-key=[10] '34323935323237393939' [ +0.000169] init: (1) ERROR: ConfigApplyWindowsLibPath:2129: open /etc/ld.so.conf.d/ld.wsl.conf [ +0.000001] failed 2 [ +0.000951] init: (2) ERROR: UtilCreateProcessAndWait:486: /bin/mount failed with 2 [ +0.000060] init: (1) ERROR: UtilCreateProcessAndWait:501: /bin/mount failed with status 0x [ +0.000001] ff00 [ +0.000004] init: (1) ERROR: ConfigMountFsTab:2184: Processing fstab with mount -a failed. [ +0.000400] init: (3) ERROR: UtilCreateProcessAndWait:486: /bin/mount failed with 2 [ +0.000055] init: (1) ERROR: UtilCreateProcessAndWait:501: /bin/mount failed with status 0x [ +0.000001] ff00 [ +0.000005] init: (1) ERROR: MountPlan9:493: mount cache=mmap,noatime,trans=fd,rfdno=8,wfdno=8,msize=65536,aname=drvfs;path=C:\;uid=0;gid=0;symlinkroot=/mnt/ [ +10.069751] WSL2: Performing memory compaction. [Mar26 16:33] WSL2: Performing memory compaction. [Mar26 16:35] WSL2: Performing memory compaction. [Mar26 16:37] WSL2: Performing memory compaction. [Mar26 16:38] WSL2: Performing memory compaction. [Mar26 16:39] WSL2: Performing memory compaction. [Mar26 16:40] WSL2: Performing memory compaction. [Mar26 16:42] WSL2: Performing memory compaction. [Mar26 16:45] WSL2: Performing memory compaction. 
[Mar26 16:47] WSL2: Performing memory compaction. [Mar26 16:49] WSL2: Performing memory compaction. [Mar26 16:52] WSL2: Performing memory compaction. [Mar26 16:54] WSL2: Performing memory compaction. [Mar26 16:55] WSL2: Performing memory compaction. [Mar26 16:56] WSL2: Performing memory compaction. [Mar26 16:57] WSL2: Performing memory compaction. * * ==> etcd [3e180e81773a] <== * {"level":"info","ts":"2022-03-26T16:56:54.659Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"info","ts":"2022-03-26T16:56:54.659Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2022-03-26T16:56:54.659Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2022-03-26T16:56:54.660Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]} {"level":"info","ts":"2022-03-26T16:56:54.660Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.1","git-sha":"e8732fb5f","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":16,"max-cpu-available":16,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2022-03-26T16:56:54.663Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"2.12ms"} 
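Note: per the flags above, this etcd serves clients over TLS on 2379 (peers on 2380) and exposes a plain-HTTP metrics/health listener on 127.0.0.1:2381 (--listen-metrics-urls). A minimal liveness check from inside the node (a sketch, assuming curl is available in the kicbase image):

  minikube ssh -- "curl -s http://127.0.0.1:2381/health"
  # expected once the single member has elected itself leader (see the lines below): {"health":"true","reason":""}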
{"level":"info","ts":"2022-03-26T16:56:54.666Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"} {"level":"info","ts":"2022-03-26T16:56:54.666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"} {"level":"info","ts":"2022-03-26T16:56:54.666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"} {"level":"info","ts":"2022-03-26T16:56:54.666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2022-03-26T16:56:54.666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"} {"level":"info","ts":"2022-03-26T16:56:54.666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"warn","ts":"2022-03-26T16:56:54.668Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2022-03-26T16:56:54.670Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2022-03-26T16:56:54.671Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2022-03-26T16:56:54.672Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"} {"level":"info","ts":"2022-03-26T16:56:54.672Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2022-03-26T16:56:54.673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"} {"level":"info","ts":"2022-03-26T16:56:54.673Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]} {"level":"info","ts":"2022-03-26T16:56:54.673Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2022-03-26T16:56:54.673Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2022-03-26T16:56:54.673Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2022-03-26T16:56:54.673Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"} 
{"level":"info","ts":"2022-03-26T16:56:54.673Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"} {"level":"info","ts":"2022-03-26T16:56:55.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"} {"level":"info","ts":"2022-03-26T16:56:55.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"} {"level":"info","ts":"2022-03-26T16:56:55.567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2022-03-26T16:56:55.568Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2022-03-26T16:56:55.569Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"} {"level":"info","ts":"2022-03-26T16:56:55.569Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"} * * ==> kernel <== * 16:58:49 up 1:14, 0 users, load average: 0.03, 0.07, 0.03 Linux minikube 5.4.72-microsoft-standard-WSL2 #1 SMP Wed Oct 28 23:40:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [0ff52845852a] <== * W0326 16:56:55.908406 1 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources. 
W0326 16:56:55.908428 1 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources. W0326 16:56:55.909659 1 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I0326 16:56:55.911946 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0326 16:56:55.911960 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W0326 16:56:55.951918 1 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. I0326 16:56:56.529058 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0326 16:56:56.529150 1 secure_serving.go:266] Serving securely on [::]:8443 I0326 16:56:56.529171 1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key" I0326 16:56:56.529208 1 available_controller.go:491] Starting AvailableConditionController I0326 16:56:56.529243 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0326 16:56:56.529260 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I0326 16:56:56.529272 1 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key" I0326 16:56:56.529182 1 controller.go:83] Starting OpenAPI AggregationController I0326 16:56:56.529400 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0326 16:56:56.529487 1 autoregister_controller.go:141] Starting autoregister controller I0326 16:56:56.529510 1 cache.go:32] Waiting for caches to sync for autoregister controller I0326 16:56:56.529586 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0326 16:56:56.529606 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0326 16:56:56.529611 1 controller.go:85] Starting OpenAPI controller I0326 16:56:56.529625 1 apf_controller.go:317] Starting API Priority and Fairness config controller I0326 16:56:56.529599 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0326 16:56:56.529631 1 naming_controller.go:291] Starting NamingConditionController I0326 16:56:56.529651 1 establishing_controller.go:76] Starting EstablishingController I0326 16:56:56.529677 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0326 16:56:56.529698 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0326 16:56:56.529712 1 crd_finalizer.go:266] Starting CRDFinalizer I0326 16:56:56.529838 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0326 16:56:56.529862 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I0326 16:56:56.529878 1 
crdregistration_controller.go:111] Starting crd-autoregister controller I0326 16:56:56.529881 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I0326 16:56:56.529903 1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt" I0326 16:56:56.533505 1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt" I0326 16:56:56.554025 1 controller.go:611] quota admission added evaluator for: namespaces I0326 16:56:56.569034 1 shared_informer.go:247] Caches are synced for node_authorizer I0326 16:56:56.649891 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0326 16:56:56.649908 1 apf_controller.go:322] Running API Priority and Fairness config worker I0326 16:56:56.649943 1 shared_informer.go:247] Caches are synced for crd-autoregister I0326 16:56:56.649982 1 cache.go:39] Caches are synced for autoregister controller I0326 16:56:56.649984 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0326 16:56:56.650031 1 cache.go:39] Caches are synced for AvailableConditionController controller I0326 16:56:57.530058 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0326 16:56:57.530084 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0326 16:56:57.533011 1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000 I0326 16:56:57.535400 1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000 I0326 16:56:57.535415 1 storage_scheduling.go:109] all system priority classes are created successfully or already exist. 
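Note: the two PriorityClass objects created above (system-node-critical at value 2000001000, system-cluster-critical at 2000000000) are what shield the control-plane pods from preemption and eviction. They can be listed with (a sketch, assuming kubectl points at this cluster):

  kubectl get priorityclasses
  # expected rows: system-cluster-critical (2000000000) and system-node-critical (2000001000)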
I0326 16:56:57.768274 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0326 16:56:57.783724 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0326 16:56:57.859291 1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1] W0326 16:56:57.861574 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I0326 16:56:57.861997 1 controller.go:611] quota admission added evaluator for: endpoints I0326 16:56:57.863670 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io I0326 16:56:58.658130 1 controller.go:611] quota admission added evaluator for: serviceaccounts I0326 16:56:59.086715 1 controller.go:611] quota admission added evaluator for: deployments.apps I0326 16:56:59.090602 1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10] I0326 16:56:59.193417 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io I0326 16:56:59.254659 1 controller.go:611] quota admission added evaluator for: daemonsets.apps I0326 16:57:12.565187 1 controller.go:611] quota admission added evaluator for: replicasets.apps I0326 16:57:12.774161 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps I0326 16:57:13.818332 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io * * ==> kube-controller-manager [abf7fc0faa43] <== * I0326 16:57:12.475299 1 controllermanager.go:605] Started "pv-protection" I0326 16:57:12.475376 1 pv_protection_controller.go:79] Starting PV protection controller I0326 16:57:12.475384 1 shared_informer.go:240] Waiting for caches to sync for PV protection I0326 16:57:12.477307 1 controllermanager.go:605] Started "csrapproving" I0326 16:57:12.477440 1 certificate_controller.go:118] Starting certificate controller "csrapproving" I0326 16:57:12.477462 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I0326 16:57:12.479699 1 shared_informer.go:240] Waiting for caches to sync for resource quota W0326 16:57:12.483084 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I0326 16:57:12.485995 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0326 16:57:12.507824 1 shared_informer.go:247] Caches are synced for ReplicaSet I0326 16:57:12.508922 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0326 16:57:12.508936 1 shared_informer.go:247] Caches are synced for endpoint I0326 16:57:12.508961 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0326 16:57:12.508977 1 shared_informer.go:247] Caches are synced for ephemeral I0326 16:57:12.508994 1 shared_informer.go:247] Caches are synced for persistent volume I0326 16:57:12.509056 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0326 16:57:12.509056 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0326 16:57:12.510134 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0326 16:57:12.512660 1 shared_informer.go:247] Caches are synced for job I0326 16:57:12.552827 1 shared_informer.go:247] Caches are synced for TTL after finished I0326 16:57:12.557725 1 shared_informer.go:247] 
I0326 16:57:12.557768 1 shared_informer.go:247] Caches are synced for attach detach
I0326 16:57:12.557933 1 shared_informer.go:247] Caches are synced for crt configmap
I0326 16:57:12.557967 1 shared_informer.go:247] Caches are synced for service account
I0326 16:57:12.558054 1 shared_informer.go:247] Caches are synced for taint
I0326 16:57:12.558128 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone:
W0326 16:57:12.558173 1 node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0326 16:57:12.558209 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal.
I0326 16:57:12.558129 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
I0326 16:57:12.558271 1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0326 16:57:12.558737 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0326 16:57:12.558826 1 shared_informer.go:247] Caches are synced for HPA
I0326 16:57:12.559496 1 shared_informer.go:247] Caches are synced for expand
I0326 16:57:12.559540 1 shared_informer.go:247] Caches are synced for deployment
I0326 16:57:12.561629 1 shared_informer.go:247] Caches are synced for GC
I0326 16:57:12.563823 1 shared_informer.go:247] Caches are synced for namespace
I0326 16:57:12.567004 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 1"
I0326 16:57:12.569303 1 shared_informer.go:247] Caches are synced for node
I0326 16:57:12.569338 1 range_allocator.go:173] Starting range CIDR allocator
I0326 16:57:12.569342 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0326 16:57:12.569349 1 shared_informer.go:247] Caches are synced for cidrallocator
I0326 16:57:12.571730 1 shared_informer.go:247] Caches are synced for PVC protection
I0326 16:57:12.572312 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24]
I0326 16:57:12.575151 1 shared_informer.go:247] Caches are synced for TTL
I0326 16:57:12.575510 1 shared_informer.go:247] Caches are synced for PV protection
I0326 16:57:12.576466 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-p57sj"
I0326 16:57:12.577505 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0326 16:57:12.577999 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0326 16:57:12.581626 1 shared_informer.go:247] Caches are synced for disruption
I0326 16:57:12.581645 1 disruption.go:371] Sending events to api server.
I0326 16:57:12.658423 1 shared_informer.go:247] Caches are synced for cronjob
I0326 16:57:12.658676 1 shared_informer.go:247] Caches are synced for ReplicationController
I0326 16:57:12.769019 1 shared_informer.go:247] Caches are synced for daemon sets
I0326 16:57:12.772997 1 shared_informer.go:247] Caches are synced for stateful set
I0326 16:57:12.776677 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-24m85"
I0326 16:57:12.779930 1 shared_informer.go:247] Caches are synced for resource quota
I0326 16:57:12.813214 1 shared_informer.go:247] Caches are synced for resource quota
I0326 16:57:13.186458 1 shared_informer.go:247] Caches are synced for garbage collector
I0326 16:57:13.266585 1 shared_informer.go:247] Caches are synced for garbage collector
I0326 16:57:13.266602 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [d6ed1950bf2f] <==
*
E0326 16:57:13.798211 1 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.4.72-microsoft-standard-WSL2/modules.builtin: no such file or directory" filePath="/lib/modules/5.4.72-microsoft-standard-WSL2/modules.builtin"
I0326 16:57:13.799312 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0326 16:57:13.799980 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0326 16:57:13.800637 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0326 16:57:13.801277 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0326 16:57:13.801906 1 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
I0326 16:57:13.806331 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0326 16:57:13.806350 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0326 16:57:13.806370 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0326 16:57:13.815800 1 server_others.go:206] "Using iptables Proxier"
I0326 16:57:13.815825 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0326 16:57:13.815830 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0326 16:57:13.815839 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0326 16:57:13.816681 1 server.go:656] "Version info" version="v1.23.3"
I0326 16:57:13.817406 1 config.go:226] "Starting endpoint slice config controller"
I0326 16:57:13.817430 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0326 16:57:13.817415 1 config.go:317] "Starting service config controller"
I0326 16:57:13.817444 1 shared_informer.go:240] Waiting for caches to sync for service config
I0326 16:57:13.917953 1 shared_informer.go:247] Caches are synced for service config
I0326 16:57:13.917962 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [565e7165dfa0] <==
*
I0326 16:56:55.068454 1 serving.go:348] Generated self-signed cert in-memory
W0326 16:56:56.552722 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0326 16:56:56.552756 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0326 16:56:56.552766 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W0326 16:56:56.552773 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0326 16:56:56.558387 1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.3"
I0326 16:56:56.559082 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0326 16:56:56.559095 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0326 16:56:56.559097 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0326 16:56:56.559084 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
W0326 16:56:56.559963 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0326 16:56:56.560008 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0326 16:56:56.560879 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0326 16:56:56.560916 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0326 16:56:56.560965 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0326 16:56:56.560973 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:56.561042 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:56.561078 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0326 16:56:56.561556 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0326 16:56:56.561571 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0326 16:56:56.562010 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0326 16:56:56.562012 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0326 16:56:56.562036 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0326 16:56:56.561582 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:56.562429 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0326 16:56:56.562437 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0326 16:56:56.562452 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0326 16:56:56.562456 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:56.562450 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0326 16:56:56.562466 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0326 16:56:56.562490 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0326 16:56:56.562510 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0326 16:56:56.562526 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:56.562547 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0326 16:56:56.562564 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0326 16:56:56.562563 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:56.562601 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0326 16:56:56.562609 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0326 16:56:56.562628 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0326 16:56:56.562632 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0326 16:56:57.416065 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0326 16:56:57.416091 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0326 16:56:57.433798 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0326 16:56:57.433819 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0326 16:56:57.441477 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0326 16:56:57.441494 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0326 16:56:57.510549 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0326 16:56:57.510569 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0326 16:56:57.525180 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0326 16:56:57.525207 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0326 16:56:57.537016 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0326 16:56:57.537037 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0326 16:56:57.612024 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0326 16:56:57.612046 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0326 16:57:00.559847 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
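Editor's note: the requestheader_controller.go:193 warning above embeds its own suggested remediation. Wrapped for readability and kept with the placeholders from the log message itself (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are placeholders, not names taken from this cluster), the suggested command would look roughly like the sketch below. The forbidden/list errors in this section are typical of the first seconds of control-plane startup and stop once the scheduler's caches sync at 16:57:00, so this step is usually unnecessary on a healthy minikube cluster.

# sketch only -- placeholder names come from the warning text, not from this cluster
kubectl create rolebinding ROLEBINDING_NAME \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=YOUR_NS:YOUR_SA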
*
* ==> kubelet <==
*
-- Logs begin at Sat 2022-03-26 16:56:39 UTC, end at Sat 2022-03-26 16:58:49 UTC. --
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.297626 1995 kubelet_node_status.go:108] "Node was previously registered" node="minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.297683 1995 kubelet_node_status.go:73] "Successfully registered node" node="minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.357347 1995 cpu_manager.go:213] "Starting CPU manager" policy="none"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.357374 1995 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.357384 1995 state_mem.go:36] "Initialized new in-memory state store"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.357453 1995 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.357460 1995 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.357463 1995 policy_none.go:49] "None policy: Start"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.358698 1995 memory_manager.go:168] "Starting memorymanager" policy="None"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.358715 1995 state_mem.go:35] "Initializing new in-memory state store"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.358789 1995 state_mem.go:75] "Updated machine memory state"
Mar 26 16:56:59 minikube kubelet[1995]: E0326 16:56:59.359008 1995 kubelet.go:2001] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.359359 1995 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.359488 1995 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.559112 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.559219 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.559251 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.559269 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:56:59 minikube kubelet[1995]: E0326 16:56:59.562598 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: E0326 16:56:59.584213 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589500 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9d3d310935e5fabe942511eec3e2cd0c-etcd-certs\") pod \"etcd-minikube\" (UID: \"9d3d310935e5fabe942511eec3e2cd0c\") " pod="kube-system/etcd-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589527 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9d3d310935e5fabe942511eec3e2cd0c-etcd-data\") pod \"etcd-minikube\" (UID: \"9d3d310935e5fabe942511eec3e2cd0c\") " pod="kube-system/etcd-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589544 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589554 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589565 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589580 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589598 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be132fe5c6572cb34d93f5e05ce2a540-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"be132fe5c6572cb34d93f5e05ce2a540\") " pod="kube-system/kube-scheduler-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589637 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589663 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589704 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589735 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589751 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589767 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589782 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd6e47233d36a9715b0ab9632f871843-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"cd6e47233d36a9715b0ab9632f871843\") " pod="kube-system/kube-apiserver-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: I0326 16:56:59.589809 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b965983ec05322d0973594a01d5e8245-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"b965983ec05322d0973594a01d5e8245\") " pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:56:59 minikube kubelet[1995]: E0326 16:56:59.785084 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Mar 26 16:57:00 minikube kubelet[1995]: I0326 16:57:00.181990 1995 apiserver.go:52] "Watching apiserver"
Mar 26 16:57:00 minikube kubelet[1995]: I0326 16:57:00.393271 1995 reconciler.go:157] "Reconciler: start to sync state"
Mar 26 16:57:00 minikube kubelet[1995]: E0326 16:57:00.786490 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Mar 26 16:57:00 minikube kubelet[1995]: E0326 16:57:00.986625 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Mar 26 16:57:01 minikube kubelet[1995]: E0326 16:57:01.185653 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Mar 26 16:57:01 minikube kubelet[1995]: I0326 16:57:01.381885 1995 request.go:665] Waited for 1.1090174s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
Mar 26 16:57:01 minikube kubelet[1995]: E0326 16:57:01.387069 1995 kubelet.go:1711] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.564704 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.579199 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.650493 1995 kuberuntime_manager.go:1098] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.650540 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdvd\" (UniqueName: \"kubernetes.io/projected/2c1a734d-eff0-490e-a396-8cdb9b199a8c-kube-api-access-6zdvd\") pod \"storage-provisioner\" (UID: \"2c1a734d-eff0-490e-a396-8cdb9b199a8c\") " pod="kube-system/storage-provisioner"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.650618 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59739ee9-5a02-4670-b78c-65efdffb4ed6-config-volume\") pod \"coredns-64897985d-p57sj\" (UID: \"59739ee9-5a02-4670-b78c-65efdffb4ed6\") " pod="kube-system/coredns-64897985d-p57sj"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.650649 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x878j\" (UniqueName: \"kubernetes.io/projected/59739ee9-5a02-4670-b78c-65efdffb4ed6-kube-api-access-x878j\") pod \"coredns-64897985d-p57sj\" (UID: \"59739ee9-5a02-4670-b78c-65efdffb4ed6\") " pod="kube-system/coredns-64897985d-p57sj"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.650667 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2c1a734d-eff0-490e-a396-8cdb9b199a8c-tmp\") pod \"storage-provisioner\" (UID: \"2c1a734d-eff0-490e-a396-8cdb9b199a8c\") " pod="kube-system/storage-provisioner"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.650938 1995 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.651056 1995 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.779323 1995 topology_manager.go:200] "Topology Admit Handler"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.851205 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3edb82c0-e0ec-44ce-948e-037d0aa178c1-kube-proxy\") pod \"kube-proxy-24m85\" (UID: \"3edb82c0-e0ec-44ce-948e-037d0aa178c1\") " pod="kube-system/kube-proxy-24m85"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.851240 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3edb82c0-e0ec-44ce-948e-037d0aa178c1-xtables-lock\") pod \"kube-proxy-24m85\" (UID: \"3edb82c0-e0ec-44ce-948e-037d0aa178c1\") " pod="kube-system/kube-proxy-24m85"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.851256 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3edb82c0-e0ec-44ce-948e-037d0aa178c1-lib-modules\") pod \"kube-proxy-24m85\" (UID: \"3edb82c0-e0ec-44ce-948e-037d0aa178c1\") " pod="kube-system/kube-proxy-24m85"
Mar 26 16:57:12 minikube kubelet[1995]: I0326 16:57:12.851279 1995 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbx4j\" (UniqueName: \"kubernetes.io/projected/3edb82c0-e0ec-44ce-948e-037d0aa178c1-kube-api-access-dbx4j\") pod \"kube-proxy-24m85\" (UID: \"3edb82c0-e0ec-44ce-948e-037d0aa178c1\") " pod="kube-system/kube-proxy-24m85"
Mar 26 16:57:13 minikube kubelet[1995]: I0326 16:57:13.154954 1995 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-p57sj through plugin: invalid network status for"
Mar 26 16:57:13 minikube kubelet[1995]: I0326 16:57:13.303837 1995 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-p57sj through plugin: invalid network status for"
Mar 26 16:57:35 minikube kubelet[1995]: I0326 16:57:35.397622 1995 scope.go:110] "RemoveContainer" containerID="16be0bf1777b815c3be5eaa8312a8591970d372804bd01a4eb7efb169b55f634"
*
* ==> storage-provisioner [166438608cc3] <==
*
I0326 16:57:35.450266 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0326 16:57:35.455363 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0326 16:57:35.455394 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0326 16:57:35.465791 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0326 16:57:35.465875 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_fef53ec0-680f-4b73-a833-51e1ad527db3!
I0326 16:57:35.465889 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb8cbb3b-3520-4f62-879f-1bafab62b928", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_fef53ec0-680f-4b73-a833-51e1ad527db3 became leader
I0326 16:57:35.566866 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_fef53ec0-680f-4b73-a833-51e1ad527db3!
*
* ==> storage-provisioner [16be0bf1777b] <==
*
I0326 16:57:13.328174 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0326 16:57:34.378228 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
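Editor's note: the only fatal entry in this dump is the earlier storage-provisioner container (16be0bf1777b), which could not reach the cluster service IP at 10.96.0.1:443 and exited; the kubelet removed it at 16:57:35 and the replacement container (166438608cc3) then acquired the kube-system/k8s.io-minikube-hostpath lease, so the addon appears to have recovered on its own. Assuming kubectl is pointed at this minikube profile, a quick way to confirm might be the standard commands below (they are not taken from the log):

# verify the storage-provisioner pod is Running and not restarting further
kubectl -n kube-system get pod storage-provisioner
# read the log of the currently running container rather than the exited one
kubectl -n kube-system logs storage-provisioner
# inspect recent events for the pod, including the earlier container exit
kubectl -n kube-system describe pod storage-provisioner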