preload causes /var conflicts with Docker Desktop File sharing #8100

Closed
plnordquist opened this issue May 12, 2020 · 16 comments
Labels: co/docker-driver · co/preload · kind/bug · priority/important-soon · triage/duplicate

@plnordquist

Steps to reproduce the issue:

  1. Upgrade to Minikube 1.10.0
  2. minikube start with the driver set to docker (invocation sketched below)
  3. Minikube fails to start
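
For reference, the reproduction boils down to roughly this invocation (a sketch; the docker driver is assumed to be preset via minikube config, matching the "user configuration" line in the log below):

  minikube start --driver=docker --alsologtostderr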

Full output of failed command:

minikube start --alsologtostderr
I0512 13:10:46.817371    1596 start.go:99] hostinfo: {"hostname":"<system-hostname>","uptime":2584,"bootTime":1589311662,"procs":283,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.18362 Build 18362","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"2ff1be69-d9b0-46b2-b9e2-f8e389f49971"}
W0512 13:10:46.818371    1596 start.go:107] gopshost.Virtualization returned error: not implemented yet
* minikube v1.10.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
I0512 13:10:46.825382    1596 notify.go:125] Checking for updates...
I0512 13:10:46.825382    1596 driver.go:253] Setting default libvirt URI to qemu:///system
I0512 13:10:47.256373    1596 docker.go:95] docker version: linux-19.03.8
* Using the docker driver based on user configuration
I0512 13:10:47.259375    1596 start.go:215] selected driver: docker
I0512 13:10:47.259375    1596 start.go:594] validating driver "docker" against <nil>
I0512 13:10:47.259375    1596 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0512 13:10:47.260343    1596 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0512 13:10:47.261385    1596 start_flags.go:217] no existing cluster config was found, will generate one from the flags
I0512 13:10:47.271382    1596 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0512 13:10:48.880977    1596 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (1.6086868s)
I0512 13:10:48.881636    1596 start_flags.go:231] Using suggested 1991MB memory alloc based on sys=16108MB, container=1991MB
I0512 13:10:48.881636    1596 start_flags.go:558] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node minikube in cluster minikube
I0512 13:10:48.886671    1596 cache.go:104] Beginning downloading kic artifacts for docker with docker
I0512 13:10:49.332606    1596 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0512 13:10:49.332606    1596 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0512 13:10:49.333432    1596 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0512 13:10:49.333432    1596 cache.go:48] Caching tarball of preloaded images
I0512 13:10:49.333432    1596 preload.go:122] Found C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0512 13:10:49.334468    1596 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.1 on docker
I0512 13:10:49.335430    1596 profile.go:156] Saving config to C:\Users\<user>\.minikube\profiles\minikube\config.json ...
I0512 13:10:49.335430    1596 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\config.json: {Name:mkefe1ed68ad1dcc9d856414ff8d3673a072cb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0512 13:10:49.337430    1596 cache.go:132] Successfully downloaded all kic artifacts
I0512 13:10:49.337430    1596 start.go:223] acquiring machines lock for minikube: {Name:mk71de99f9d15522919eee1cb7da11f7d05e4fb9 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0512 13:10:49.338431    1596 start.go:227] acquired machines lock for "minikube" in 0s
I0512 13:10:49.338431    1596 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}
I0512 13:10:49.339430    1596 start.go:104] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=1991MB) ...
I0512 13:10:49.343432    1596 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0512 13:10:49.343432    1596 client.go:161] LocalClient.Create starting
I0512 13:10:49.343432    1596 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\ca.pem
I0512 13:10:49.344433    1596 main.go:110] libmachine: Decoding PEM data...
I0512 13:10:49.344433    1596 main.go:110] libmachine: Parsing certificate...
I0512 13:10:49.345432    1596 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\cert.pem
I0512 13:10:49.345432    1596 main.go:110] libmachine: Decoding PEM data...
I0512 13:10:49.345432    1596 main.go:110] libmachine: Parsing certificate...
I0512 13:10:49.366465    1596 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0512 13:10:49.789145    1596 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0512 13:10:50.203824    1596 oci.go:98] Successfully created a docker volume minikube
I0512 13:10:50.203824    1596 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0512 13:10:50.204844    1596 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0512 13:10:50.204844    1596 kic.go:134] Starting extracting preloaded images to volume ...
I0512 13:10:50.213827    1596 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0512 13:10:50.214825    1596 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0512 13:10:51.899585    1596 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (1.6848685s)
I0512 13:10:51.912585    1596 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0512 13:10:53.875153    1596 cli_runner.go:150] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.9616956s)
I0512 13:10:53.888156    1596 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0512 13:10:55.642476    1596 cli_runner.go:150] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: (1.7534331s)
I0512 13:10:55.658476    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:56.423966    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:56.904471    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:57.380006    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:57.855038    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:58.358009    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:58.904517    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:10:59.414551    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:00.022509    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:00.313375    1596 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (10.0991987s)
I0512 13:11:00.314377    1596 kic.go:139] duration metric: took 10.110182 seconds to extract preloaded images to volume
I0512 13:11:00.628027    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:01.452107    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:02.646238    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:04.522723    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:06.141347    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:08.772881    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:09.206804    1596 client.go:164] LocalClient.Create took 19.8646485s
I0512 13:11:11.207458    1596 start.go:107] duration metric: createHost completed in 21.8694343s
I0512 13:11:11.208449    1596 start.go:74] releasing machines lock for "minikube", held for 21.8714243s
I0512 13:11:11.230473    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0512 13:11:11.627064    1596 stop.go:36] StopHost: minikube
* Stopping "minikube" in docker ...
I0512 13:11:11.649907    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0512 13:11:12.081859    1596 stop.go:76] host is in state Stopped
I0512 13:11:12.081859    1596 main.go:110] libmachine: Stopping "minikube"...
I0512 13:11:12.098543    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0512 13:11:12.509620    1596 stop.go:56] stop err: Machine "minikube" is already stopped.
I0512 13:11:12.509620    1596 stop.go:59] host is already stopped
* Deleting "minikube" in docker ...
I0512 13:11:13.523370    1596 cli_runner.go:108] Run: docker inspect -f {{.Id}} minikube
I0512 13:11:13.938754    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0512 13:11:14.366730    1596 cli_runner.go:108] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0512 13:11:14.800630    1596 oci.go:544] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error response from daemon: Container f8e9eb52c95c0aec09fb6a969c11adba1966b3436450e18a4bf1a2beb13a969b is not running
I0512 13:11:15.810620    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0512 13:11:16.244043    1596 oci.go:552] container minikube status is Stopped
I0512 13:11:16.244043    1596 oci.go:564] Successfully shutdown container minikube
I0512 13:11:16.252996    1596 cli_runner.go:108] Run: docker rm -f -v minikube
I0512 13:11:16.706039    1596 cli_runner.go:108] Run: docker inspect -f {{.Id}} minikube
! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
I0512 13:11:22.148712    1596 start.go:223] acquiring machines lock for minikube: {Name:mk71de99f9d15522919eee1cb7da11f7d05e4fb9 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0512 13:11:22.149389    1596 start.go:227] acquired machines lock for "minikube" in 677.3µs
I0512 13:11:22.149389    1596 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.1 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.1 ControlPlane:true Worker:true}
I0512 13:11:22.151427    1596 start.go:104] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=1991MB) ...
I0512 13:11:22.154434    1596 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0512 13:11:22.155431    1596 client.go:161] LocalClient.Create starting
I0512 13:11:22.155431    1596 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\ca.pem
I0512 13:11:22.155431    1596 main.go:110] libmachine: Decoding PEM data...
I0512 13:11:22.156391    1596 main.go:110] libmachine: Parsing certificate...
I0512 13:11:22.156391    1596 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\cert.pem
I0512 13:11:22.157388    1596 main.go:110] libmachine: Decoding PEM data...
I0512 13:11:22.157388    1596 main.go:110] libmachine: Parsing certificate...
I0512 13:11:22.184385    1596 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0512 13:11:22.603238    1596 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0512 13:11:23.004004    1596 oci.go:98] Successfully created a docker volume minikube
I0512 13:11:23.004004    1596 preload.go:81] Checking if preload exists for k8s version v1.18.1 and runtime docker
I0512 13:11:23.005005    1596 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4
I0512 13:11:23.005005    1596 kic.go:134] Starting extracting preloaded images to volume ...
I0512 13:11:23.017003    1596 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0512 13:11:23.018003    1596 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0512 13:11:24.756757    1596 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (1.7398651s)
I0512 13:11:24.771748    1596 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0512 13:11:26.469200    1596 cli_runner.go:150] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.6975615s)
I0512 13:11:26.485175    1596 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0512 13:11:27.860121    1596 cli_runner.go:150] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: (1.3740362s)
I0512 13:11:27.877114    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:28.897093    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:29.344727    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:29.793382    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:30.276373    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:30.819817    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:31.345677    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:31.878231    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:32.535317    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:33.317587    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:33.884649    1596 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (10.8663445s)
I0512 13:11:33.884649    1596 kic.go:139] duration metric: took 10.880343 seconds to extract preloaded images to volume
I0512 13:11:34.424257    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:35.454497    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:36.773882    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:39.343600    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:41.571892    1596 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0512 13:11:41.972575    1596 client.go:164] LocalClient.Create took 19.8184186s
I0512 13:11:43.972842    1596 start.go:107] duration metric: createHost completed in 21.8228181s
I0512 13:11:43.973639    1596 start.go:74] releasing machines lock for "minikube", held for 21.8256537s
* Failed to start docker container. "minikube start" may fix it: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
I0512 13:11:43.975589    1596 exit.go:58] WithError(error provisioning host)=Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet called from:
goroutine 1 [running]:
runtime/debug.Stack(0x40acf1, 0x18d3660, 0x18b8300)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xa4
k8s.io/minikube/pkg/minikube/exit.WithError(0x1b3f8ac, 0x17, 0x1dfc1c0, 0xc000114860)
        /app/pkg/minikube/exit/exit.go:58 +0x3b
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b53760, 0xc0001b7fb0, 0x0, 0x1)
        /app/cmd/minikube/cmd/start.go:170 +0xac9
github.com/spf13/cobra.(*Command).execute(0x2b53760, 0xc0001b7fa0, 0x1, 0x1, 0x2b53760, 0xc0001b7fa0)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2b1
github.com/spf13/cobra.(*Command).ExecuteC(0x2b527a0, 0x0, 0x0, 0xc0002f0a01)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x350
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:112 +0x6f5
main.main()
        /app/cmd/minikube/main.go:66 +0xf1
W0512 13:11:43.983577    1596 out.go:201] error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
*
X error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

Full output of minikube start command used, if not already included:

Optional: Full output of minikube logs command:

Minikube Container logs:

INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables
@medyagh
Member

medyagh commented May 12, 2020

@plnordquist thanks for creating this issue. I am not sure what the root cause of this is, but have you tried minikube delete and starting again?

I am wondering if minikube is trying to reuse an older version of the container that had a different image.

I also see this:

I0512 13:11:14.800630    1596 oci.go:544] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1

That makes me believe the container was not able to shut down and was stuck; you might want to restart Docker.
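
In other words, something like this (a sketch of the suggested recovery; restarting Docker Desktop itself is done from its tray menu):

  minikube delete   # remove the possibly-stuck container and its volume
  minikube start    # retry against a freshly restarted daemon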

Do you mind sharing how much RAM your Docker Desktop has?

medyagh added the co/docker-driver, triage/needs-information, and kind/support labels on May 12, 2020.
@plnordquist
Author

My Docker Desktop has 2GB of RAM. If I factory reset Docker Desktop, minikube can start successfully. Once I run minikube stop, minikube delete, and minikube start again, it fails to start with the same container logs I posted in the initial post (the sequence is sketched below). I upgraded to v1.10.1 to test this again and I'm seeing the same behavior. Here are more logs of the successful start and the failed start. I'm running Docker Desktop Edge v2.3.0.1 with the Hyper-V backend. In the failure scenario, minikube logs fails to produce output since the minikube docker container is stopped and the control plane is not running.
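
The failing sequence, roughly (the docker driver is already configured; the factory reset is done from Docker Desktop's Troubleshoot menu):

  minikube start    # succeeds on a freshly reset Docker Desktop
  minikube stop
  minikube delete
  minikube start    # fails, with the same container logs as in the initial post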

Good minikube start --alsologtostderr logs:

I0513 10:16:54.699384   22028 start.go:99] hostinfo: {"hostname":"<system-hostname>","uptime":78552,"bootTime":1589311662,"procs":277,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.18362 Build 18362","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"2ff1be69-d9b0-46b2-b9e2-f8e389f49971"}
W0513 10:16:54.700381   22028 start.go:107] gopshost.Virtualization returned error: not implemented yet
* minikube v1.10.1 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
I0513 10:16:54.706356   22028 driver.go:253] Setting default libvirt URI to qemu:///system
I0513 10:16:54.819344   22028 docker.go:95] docker version: linux-19.03.8
* Using the docker driver based on user configuration
I0513 10:16:54.821344   22028 start.go:215] selected driver: docker
I0513 10:16:54.821344   22028 start.go:594] validating driver "docker" against <nil>
I0513 10:16:54.821344   22028 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0513 10:16:54.821344   22028 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0513 10:16:54.822345   22028 start_flags.go:217] no existing cluster config was found, will generate one from the flags 
I0513 10:16:54.831377   22028 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 10:16:55.178381   22028 start_flags.go:231] Using suggested 1991MB memory alloc based on sys=16108MB, container=1991MB
I0513 10:16:55.179356   22028 start_flags.go:558] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node minikube in cluster minikube
I0513 10:16:55.181373   22028 cache.go:104] Beginning downloading kic artifacts for docker with docker
* Pulling base image ...
I0513 10:16:55.316379   22028 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 10:16:55.316379   22028 cache.go:110] Downloading gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 to local daemon
I0513 10:16:55.316379   22028 image.go:98] Writing gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 to local daemon
I0513 10:16:55.316379   22028 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 10:16:55.316379   22028 cache.go:48] Caching tarball of preloaded images
I0513 10:16:55.316379   22028 preload.go:122] Found C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0513 10:16:55.316379   22028 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0513 10:16:55.317357   22028 profile.go:156] Saving config to C:\Users\<user>\.minikube\profiles\minikube\config.json ...
I0513 10:16:55.318346   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\config.json: {Name:mkefe1ed68ad1dcc9d856414ff8d3673a072cb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:18.798330   22028 cache.go:132] Successfully downloaded all kic artifacts
I0513 10:19:18.798330   22028 start.go:223] acquiring machines lock for minikube: {Name:mk71de99f9d15522919eee1cb7da11f7d05e4fb9 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0513 10:19:18.798330   22028 start.go:227] acquired machines lock for "minikube" in 0s
I0513 10:19:18.798330   22028 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}
I0513 10:19:18.798330   22028 start.go:104] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=1991MB) ...
I0513 10:19:18.801390   22028 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0513 10:19:18.801390   22028 client.go:161] LocalClient.Create starting
I0513 10:19:18.801390   22028 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\ca.pem
I0513 10:19:18.802337   22028 main.go:110] libmachine: Decoding PEM data...
I0513 10:19:18.802337   22028 main.go:110] libmachine: Parsing certificate...
I0513 10:19:18.802337   22028 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\cert.pem
I0513 10:19:18.802337   22028 main.go:110] libmachine: Decoding PEM data...
I0513 10:19:18.802337   22028 main.go:110] libmachine: Parsing certificate...
I0513 10:19:18.825341   22028 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0513 10:19:18.927372   22028 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0513 10:19:19.013349   22028 oci.go:98] Successfully created a docker volume minikube
I0513 10:19:19.013349   22028 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 10:19:19.013349   22028 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 10:19:19.013349   22028 kic.go:134] Starting extracting preloaded images to volume ...
I0513 10:19:19.022371   22028 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 10:19:19.023371   22028 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0513 10:19:19.435384   22028 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0513 10:19:19.812352   22028 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0513 10:19:20.566973   22028 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:19:20.670970   22028 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:19:20.762977   22028 oci.go:212] the created container "minikube" has a running status.
I0513 10:19:20.762977   22028 kic.go:162] Creating ssh key for kic: C:\Users\<user>\.minikube\machines\minikube\id_rsa...
I0513 10:19:20.860013   22028 kic_runner.go:179] docker (temp): C:\Users\<user>\.minikube\machines\minikube\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0513 10:19:21.082695   22028 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0513 10:19:21.082695   22028 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0513 10:19:48.158701   22028 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (29.1353295s)
I0513 10:19:48.158701   22028 kic.go:139] duration metric: took 29.145352 seconds to extract preloaded images to volume
I0513 10:19:48.174735   22028 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
! Executing "docker inspect minikube --format={{.State.Status}}" took an unusually long time: 2.55554s
* Restarting the docker service may improve performance.
I0513 10:19:50.730275   22028 cli_runner.go:150] Completed: docker inspect minikube --format={{.State.Status}}: (2.55554s)
I0513 10:19:50.730275   22028 machine.go:86] provisioning docker machine ...
I0513 10:19:50.730275   22028 ubuntu.go:166] provisioning hostname "minikube"
I0513 10:19:50.739163   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:50.829161   22028 main.go:110] libmachine: Using SSH client type: native
I0513 10:19:50.829161   22028 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0513 10:19:50.829161   22028 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0513 10:19:51.009432   22028 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0513 10:19:51.018459   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:51.108534   22028 main.go:110] libmachine: Using SSH client type: native
I0513 10:19:51.109513   22028 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0513 10:19:51.109513   22028 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0513 10:19:51.236971   22028 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0513 10:19:51.237970   22028 ubuntu.go:172] set auth options {CertDir:C:\Users\<user>\.minikube CaCertPath:C:\Users\<user>\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\<user>\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\<user>\.minikube\machines\server.pem ServerKeyPath:C:\Users\<user>\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\<user>\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\<user>\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\<user>\.minikube}
I0513 10:19:51.237970   22028 ubuntu.go:174] setting up certificates
I0513 10:19:51.237970   22028 provision.go:82] configureAuth start
I0513 10:19:51.245970   22028 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0513 10:19:51.339134   22028 provision.go:131] copyHostCerts
I0513 10:19:51.339134   22028 exec_runner.go:91] found C:\Users\<user>\.minikube/ca.pem, removing ...
I0513 10:19:51.339134   22028 exec_runner.go:98] cp: C:\Users\<user>\.minikube\certs\ca.pem --> C:\Users\<user>\.minikube/ca.pem (1038 bytes)
I0513 10:19:51.340997   22028 exec_runner.go:91] found C:\Users\<user>\.minikube/cert.pem, removing ...
I0513 10:19:51.340997   22028 exec_runner.go:98] cp: C:\Users\<user>\.minikube\certs\cert.pem --> C:\Users\<user>\.minikube/cert.pem (1078 bytes)
I0513 10:19:51.341970   22028 exec_runner.go:91] found C:\Users\<user>\.minikube/key.pem, removing ...
I0513 10:19:51.342970   22028 exec_runner.go:98] cp: C:\Users\<user>\.minikube\certs\key.pem --> C:\Users\<user>\.minikube/key.pem (1675 bytes)
I0513 10:19:51.346998   22028 provision.go:105] generating server cert: C:\Users\<user>\.minikube\machines\server.pem ca-key=C:\Users\<user>\.minikube\certs\ca.pem private-key=C:\Users\<user>\.minikube\certs\ca-key.pem org=<user>.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0513 10:19:51.483010   22028 provision.go:159] copyRemoteCerts
I0513 10:19:51.495007   22028 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0513 10:19:51.503010   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:51.585194   22028 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 10:19:51.674469   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\machines\server.pem --> /etc/docker/server.pem (1123 bytes)
I0513 10:19:51.702099   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0513 10:19:51.720101   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1038 bytes)
I0513 10:19:51.737889   22028 provision.go:85] duration metric: configureAuth took 499.9191ms
I0513 10:19:51.737889   22028 ubuntu.go:190] setting minikube options for container-runtime
I0513 10:19:51.745925   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:51.843947   22028 main.go:110] libmachine: Using SSH client type: native
I0513 10:19:51.844948   22028 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0513 10:19:51.844948   22028 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0513 10:19:51.990214   22028 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0513 10:19:51.990214   22028 ubuntu.go:71] root file system type: overlay
I0513 10:19:51.991216   22028 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0513 10:19:52.000250   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:52.088353   22028 main.go:110] libmachine: Using SSH client type: native
I0513 10:19:52.088353   22028 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0513 10:19:52.088353   22028 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0513 10:19:52.220277   22028 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0513 10:19:52.228277   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:52.318384   22028 main.go:110] libmachine: Using SSH client type: native
I0513 10:19:52.318384   22028 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32771 <nil> <nil>}
I0513 10:19:52.318384   22028 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0513 10:19:52.992241   22028 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-05-13 17:19:52.217398143 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0513 10:19:52.992241   22028 machine.go:89] provisioned docker machine in 2.2619662s
I0513 10:19:52.992241   22028 client.go:164] LocalClient.Create took 34.1908509s
I0513 10:19:52.992241   22028 start.go:145] duration metric: libmachine.API.Create for "minikube" took 34.1908509s
I0513 10:19:52.992241   22028 start.go:186] post-start starting for "minikube" (driver="docker")
I0513 10:19:52.992241   22028 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0513 10:19:53.006241   22028 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0513 10:19:53.014244   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:53.097690   22028 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 10:19:53.226359   22028 ssh_runner.go:148] Run: cat /etc/os-release
I0513 10:19:53.232343   22028 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0513 10:19:53.232343   22028 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0513 10:19:53.232343   22028 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0513 10:19:53.232343   22028 info.go:96] Remote host: Ubuntu 19.10
I0513 10:19:53.232343   22028 filesync.go:118] Scanning C:\Users\<user>\.minikube\addons for local assets ...
I0513 10:19:53.232343   22028 filesync.go:118] Scanning C:\Users\<user>\.minikube\files for local assets ...
I0513 10:19:53.233342   22028 start.go:189] post-start completed in 241.1009ms
I0513 10:19:53.235342   22028 start.go:107] duration metric: createHost completed in 34.4370112s
I0513 10:19:53.235342   22028 start.go:74] releasing machines lock for "minikube", held for 34.4370112s
I0513 10:19:53.243376   22028 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0513 10:19:53.325592   22028 profile.go:156] Saving config to C:\Users\<user>\.minikube\profiles\minikube\config.json ...
I0513 10:19:53.328593   22028 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0513 10:19:53.338631   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:53.344622   22028 ssh_runner.go:148] Run: systemctl --version
I0513 10:19:53.355594   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:19:53.434639   22028 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 10:19:53.449594   22028 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 10:19:53.549331   22028 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0513 10:19:53.562313   22028 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0513 10:19:53.576295   22028 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0513 10:19:53.602331   22028 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0513 10:19:53.674136   22028 ssh_runner.go:148] Run: sudo systemctl start docker
I0513 10:19:53.693146   22028 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
* Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
I0513 10:19:53.896401   22028 cli_runner.go:108] Run: docker exec -t minikube dig +short host.docker.internal
I0513 10:19:54.077574   22028 network.go:57] got host ip for mount in container by digging dns: 192.168.65.2
I0513 10:19:54.077574   22028 start.go:251] checking
I0513 10:19:54.090433   22028 ssh_runner.go:148] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
I0513 10:19:54.095401   22028 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0513 10:19:54.115435   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
  - kubeadm.pod-network-cidr=10.244.0.0/16
I0513 10:19:54.205428   22028 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 10:19:54.206402   22028 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 10:19:54.214402   22028 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0513 10:19:54.255956   22028 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0513 10:19:54.255956   22028 docker.go:317] Images already preloaded, skipping extraction
I0513 10:19:54.264988   22028 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0513 10:19:54.306960   22028 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0513 10:19:54.306960   22028 cache_images.go:69] Images are preloaded, skipping loading
I0513 10:19:54.306960   22028 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.2 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0513 10:19:54.306960   22028 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 172.17.0.2:10249

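The podSubnet in the generated kubeadm config above comes from the auto-set kubeadm.pod-network-cidr extra-config; the equivalent value could also be passed explicitly at start time (illustrative invocation, not taken from this run):

minikube start --driver=docker --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16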
I0513 10:19:54.315989   22028 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0513 10:19:54.362152   22028 kubeadm.go:737] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0513 10:19:54.376186   22028 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.2
I0513 10:19:54.387388   22028 binaries.go:43] Found k8s binaries, skipping transfer
I0513 10:19:54.401372   22028 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0513 10:19:54.408408   22028 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1458 bytes)
I0513 10:19:54.427374   22028 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (532 bytes)
I0513 10:19:54.445374   22028 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0513 10:19:54.462399   22028 start.go:251] checking
I0513 10:19:54.475408   22028 ssh_runner.go:148] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
I0513 10:19:54.480401   22028 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0513 10:19:54.503407   22028 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0513 10:19:54.570995   22028 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0513 10:19:54.584961   22028 certs.go:52] Setting up C:\Users\<user>\.minikube\profiles\minikube for IP: 172.17.0.2
I0513 10:19:54.584961   22028 certs.go:169] skipping minikubeCA CA generation: C:\Users\<user>\.minikube\ca.key
I0513 10:19:54.584961   22028 certs.go:169] skipping proxyClientCA CA generation: C:\Users\<user>\.minikube\proxy-client-ca.key
I0513 10:19:54.585962   22028 certs.go:267] generating minikube-user signed cert: C:\Users\<user>\.minikube\profiles\minikube\client.key
I0513 10:19:54.585962   22028 crypto.go:69] Generating cert C:\Users\<user>\.minikube\profiles\minikube\client.crt with IP's: []
I0513 10:19:54.695961   22028 crypto.go:157] Writing cert to C:\Users\<user>\.minikube\profiles\minikube\client.crt ...
I0513 10:19:54.695961   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\client.crt: {Name:mk762279d656356d328657ed3ff5ff476401dd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:54.701960   22028 crypto.go:165] Writing key to C:\Users\<user>\.minikube\profiles\minikube\client.key ...
I0513 10:19:54.701960   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\client.key: {Name:mk05d45ecbe1986a628c8c430d55811fe08088f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:54.706960   22028 certs.go:267] generating minikube signed cert: C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f
I0513 10:19:54.706960   22028 crypto.go:69] Generating cert C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0513 10:19:54.810959   22028 crypto.go:157] Writing cert to C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f ...
I0513 10:19:54.810959   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f: {Name:mkf30c903369b0627ccbd028b34e439c6262538b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:54.817292   22028 crypto.go:165] Writing key to C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f ...
I0513 10:19:54.817292   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f: {Name:mkce5570a73f1fe64c6fad4a45f8970673940380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:54.822844   22028 certs.go:278] copying C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f -> C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt
I0513 10:19:54.824842   22028 certs.go:282] copying C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f -> C:\Users\<user>\.minikube\profiles\minikube\apiserver.key
I0513 10:19:54.826844   22028 certs.go:267] generating aggregator signed cert: C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key
I0513 10:19:54.826844   22028 crypto.go:69] Generating cert C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt with IP's: []
I0513 10:19:55.011877   22028 crypto.go:157] Writing cert to C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt ...
I0513 10:19:55.011877   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt: {Name:mk5a9f11f3f7b57801d322dba07701f995c7356f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:55.017858   22028 crypto.go:165] Writing key to C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key ...
I0513 10:19:55.017858   22028 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key: {Name:mk912815cb3875cbdf901f052a75aff368017a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:19:55.022845   22028 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\ca-key.pem (1679 bytes)
I0513 10:19:55.022845   22028 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\ca.pem (1038 bytes)
I0513 10:19:55.022845   22028 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\cert.pem (1078 bytes)
I0513 10:19:55.022845   22028 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\key.pem (1675 bytes)
I0513 10:19:55.023842   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0513 10:19:55.043842   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0513 10:19:55.062841   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0513 10:19:55.084886   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0513 10:19:55.102891   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0513 10:19:55.119884   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0513 10:19:55.138887   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0513 10:19:55.156922   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0513 10:19:55.174887   22028 ssh_runner.go:215] scp C:\Users\<user>\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0513 10:19:55.193887   22028 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0513 10:19:55.225920   22028 ssh_runner.go:148] Run: openssl version
I0513 10:19:55.250921   22028 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0513 10:19:55.273921   22028 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0513 10:19:55.278920   22028 certs.go:383] hashing: -rw-r--r-- 1 root root 1066 May 13 16:57 /usr/share/ca-certificates/minikubeCA.pem
I0513 10:19:55.293887   22028 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0513 10:19:55.316889   22028 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0513 10:19:55.326890   22028 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0513 10:19:55.336896   22028 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0513 10:19:55.395889   22028 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0513 10:19:55.418900   22028 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0513 10:19:55.428900   22028 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0513 10:19:55.442900   22028 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0513 10:19:55.452905   22028 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0513 10:19:55.452905   22028 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0513 10:20:06.554703   22028 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (11.1017983s)
I0513 10:20:06.554703   22028 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0513 10:20:06.572712   22028 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0513 10:20:06.572712   22028 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.2/kubectl label nodes minikube.k8s.io/version=v1.10.1 minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_05_13T10_20_06_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0513 10:20:06.575710   22028 ops.go:35] apiserver oom_adj: -16
I0513 10:20:07.203592   22028 kubeadm.go:868] duration metric: took 648.8887ms to wait for elevateKubeSystemPrivileges.
I0513 10:20:07.216586   22028 kubeadm.go:295] StartCluster complete in 11.8896967s
I0513 10:20:07.216586   22028 settings.go:123] acquiring lock: {Name:mk47b1af55da9543d5dc5a8134d40d87d83e1197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:20:07.216586   22028 settings.go:131] Updating kubeconfig:  C:\Users\<user>/.kube/config
I0513 10:20:07.218585   22028 lock.go:35] WriteFile acquiring C:\Users\<user>/.kube/config: {Name:mkfb29448095b1e10f04ea1bfff92578826b9eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:20:07.224585   22028 addons.go:320] enableAddons start: toEnable=map[], additional=[]
* Verifying Kubernetes components...
I0513 10:20:07.226585   22028 addons.go:50] Setting storage-provisioner=true in profile "minikube"
I0513 10:20:07.226585   22028 addons.go:50] Setting default-storageclass=true in profile "minikube"
I0513 10:20:07.226585   22028 addons.go:126] Setting addon storage-provisioner=true in "minikube"
I0513 10:20:07.226585   22028 addons.go:266] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0513 10:20:07.226585   22028 addons.go:135] addon storage-provisioner should already be in state true
I0513 10:20:07.226585   22028 host.go:65] Checking if "minikube" exists ...
I0513 10:20:07.237584   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0513 10:20:07.250600   22028 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:20:07.251605   22028 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:20:07.353585   22028 api_server.go:47] waiting for apiserver process to appear ...
I0513 10:20:07.357597   22028 addons.go:233] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0513 10:20:07.357597   22028 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0513 10:20:07.367585   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:20:07.370587   22028 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0513 10:20:07.386587   22028 api_server.go:67] duration metric: took 162.0026ms to wait for apiserver process to appear ...
I0513 10:20:07.386587   22028 api_server.go:83] waiting for apiserver healthz status ...
I0513 10:20:07.386587   22028 api_server.go:193] Checking apiserver healthz at https://127.0.0.1:32768/healthz ...
I0513 10:20:07.395586   22028 api_server.go:213] https://127.0.0.1:32768/healthz returned 200:
ok
I0513 10:20:07.397599   22028 api_server.go:136] control plane version: v1.18.2
I0513 10:20:07.397599   22028 api_server.go:126] duration metric: took 11.0122ms to wait for apiserver health ...
I0513 10:20:07.397599   22028 system_pods.go:43] waiting for kube-system pods to appear ...
I0513 10:20:07.397599   22028 addons.go:126] Setting addon default-storageclass=true in "minikube"
W0513 10:20:07.397599   22028 addons.go:135] addon default-storageclass should already be in state true
I0513 10:20:07.398600   22028 host.go:65] Checking if "minikube" exists ...
I0513 10:20:07.406585   22028 system_pods.go:61] 3 kube-system pods found
I0513 10:20:07.406585   22028 system_pods.go:63] "etcd-minikube" [5be5ed19-0cfb-45f6-b689-633429a92100] Pending
I0513 10:20:07.406585   22028 system_pods.go:63] "kube-apiserver-minikube" [22cfbb93-fdb4-4f26-8000-c9af96de0fa6] Pending
I0513 10:20:07.406585   22028 system_pods.go:63] "kube-controller-manager-minikube" [daeac22f-9d37-4064-bef2-5e16b46f0285] Pending
I0513 10:20:07.406585   22028 system_pods.go:74] duration metric: took 8.9858ms to wait for pod list to return data ...
I0513 10:20:07.406585   22028 kubeadm.go:449] duration metric: took 182.0006ms to wait for : map[apiserver:true system_pods:true] ...
I0513 10:20:07.406585   22028 node_conditions.go:99] verifying NodePressure condition ...
I0513 10:20:07.412603   22028 node_conditions.go:111] node storage ephemeral capacity is 65792556Ki
I0513 10:20:07.412603   22028 node_conditions.go:112] node cpu capacity is 2
I0513 10:20:07.412603   22028 node_conditions.go:102] duration metric: took 6.0175ms to run NodePressure ...
I0513 10:20:07.423587   22028 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:20:07.475585   22028 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 10:20:07.522588   22028 addons.go:233] installing /etc/kubernetes/addons/storageclass.yaml
I0513 10:20:07.522588   22028 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0513 10:20:07.532587   22028 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 10:20:07.605585   22028 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0513 10:20:07.628588   22028 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32771 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 10:20:07.753585   22028 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0513 10:20:07.970666   22028 addons.go:322] enableAddons completed in 746.0809ms
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
I0513 10:20:08.135617   22028 start.go:378] kubectl: 1.18.2, cluster: 1.18.2 (minor skew: 0)
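After a successful start like the one above, the cluster can be sanity-checked with standard kubectl commands (not part of the captured output):

kubectl get nodes
kubectl get pods -n kube-system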

For comparison, here is the minikube logs output from the good (working) start above:

* ==> Docker <==
* -- Logs begin at Wed 2020-05-13 17:19:20 UTC, end at Wed 2020-05-13 17:24:21 UTC. --
* May 13 17:19:21 minikube systemd[1]: Starting Docker Application Container Engine...
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.068129269Z" level=info msg="Starting up"
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.069377669Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.069447069Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.069511369Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.069581669Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.069704769Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00059f930, CONNECTING" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.103821668Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00059f930, READY" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.104984268Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.105015268Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.105052268Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.105060868Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.105169768Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000742e50, CONNECTING" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.105781268Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000742e50, READY" module=grpc
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.134469168Z" level=info msg="Loading containers: start."
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.243078467Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.283632067Z" level=info msg="Loading containers: done."
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.301771766Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.301863066Z" level=info msg="Daemon has completed initialization"
* May 13 17:19:21 minikube systemd[1]: Started Docker Application Container Engine.
* May 13 17:19:21 minikube dockerd[114]: time="2020-05-13T17:19:21.389005266Z" level=info msg="API listen on /run/docker.sock"
* May 13 17:19:52 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
* May 13 17:19:52 minikube systemd[1]: Stopping Docker Application Container Engine...
* May 13 17:19:52 minikube dockerd[114]: time="2020-05-13T17:19:52.594351408Z" level=info msg="Processing signal 'terminated'"
* May 13 17:19:52 minikube dockerd[114]: time="2020-05-13T17:19:52.595187008Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
* May 13 17:19:52 minikube dockerd[114]: time="2020-05-13T17:19:52.595663908Z" level=info msg="Daemon shutdown complete"
* May 13 17:19:52 minikube systemd[1]: docker.service: Succeeded.
* May 13 17:19:52 minikube systemd[1]: Stopped Docker Application Container Engine.
* May 13 17:19:52 minikube systemd[1]: Starting Docker Application Container Engine...
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.681768600Z" level=info msg="Starting up"
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.683789800Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.683820400Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.683838500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.683846300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.683896600Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005a890, CONNECTING" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.690087199Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005a890, READY" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.701478298Z" level=info msg="parsed scheme: \"unix\"" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.701508098Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.701528998Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] }" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.701536398Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.701571598Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005ad70, CONNECTING" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.701868598Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc00005ad70, READY" module=grpc
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.704760898Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.713807297Z" level=info msg="Loading containers: start."
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.807470988Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.845693185Z" level=info msg="Loading containers: done."
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.895257180Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.895314980Z" level=info msg="Daemon has completed initialization"
* May 13 17:19:52 minikube systemd[1]: Started Docker Application Container Engine.
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.991019071Z" level=info msg="API listen on /var/run/docker.sock"
* May 13 17:19:52 minikube dockerd[343]: time="2020-05-13T17:19:52.991096971Z" level=info msg="API listen on [::]:2376"
* May 13 17:20:45 minikube dockerd[343]: time="2020-05-13T17:20:45.249192709Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
* May 13 17:20:45 minikube dockerd[343]: time="2020-05-13T17:20:45.249463009Z" level=warning msg="3e3b13d786a751c75906058821eef71012fa5077f0dea527ca8dad0c9089857c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3e3b13d786a751c75906058821eef71012fa5077f0dea527ca8dad0c9089857c/mounts/shm, flags: 0x2: no such file or directory"
* 
* ==> container status <==
* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
* 33b4d1205ef13       4689081edb103       3 minutes ago       Running             storage-provisioner       1                   d8c4f7dd30427
* ff6d3395bd80a       67da37a9a360e       3 minutes ago       Running             coredns                   0                   61a27e421b680
* a02b816d5e83d       67da37a9a360e       3 minutes ago       Running             coredns                   0                   905a7cce42dd8
* 9cede74769d5d       0d40868643c69       3 minutes ago       Running             kube-proxy                0                   bc6a86963ed87
* 3e3b13d786a75       4689081edb103       3 minutes ago       Exited              storage-provisioner       0                   d8c4f7dd30427
* f19381bc9943d       a3099161e1375       4 minutes ago       Running             kube-scheduler            0                   cc68becb64009
* 76663f88b436f       303ce5db0e90d       4 minutes ago       Running             etcd                      0                   d717a23fb0d55
* 586e0d3bc3b17       6ed75ad404bdd       4 minutes ago       Running             kube-apiserver            0                   97582418cb4e5
* 09bb735530ce2       ace0a8c17ba90       4 minutes ago       Running             kube-controller-manager   0                   9cda91cbaafeb
* 
* ==> coredns [a02b816d5e83] <==
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* I0513 17:20:45.163294       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-13 17:20:24.157112334 +0000 UTC m=+0.106719795) (total time: 21.00612188s):
* Trace[2019727887]: [21.00612188s] [21.00612188s] END
* E0513 17:20:45.163329       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0513 17:20:45.163506       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-13 17:20:24.157111634 +0000 UTC m=+0.106719095) (total time: 21.00638398s):
* Trace[1427131847]: [21.00638398s] [21.00638398s] END
* E0513 17:20:45.163535       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0513 17:20:45.164290       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-13 17:20:24.159369334 +0000 UTC m=+0.108976695) (total time: 21.00490198s):
* Trace[939984059]: [21.00490198s] [21.00490198s] END
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* E0513 17:20:45.164321       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* 
* ==> coredns [ff6d3395bd80] <==
* I0513 17:20:45.162648       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-13 17:20:24.157163334 +0000 UTC m=+0.074607297) (total time: 21.00538538s):
* .:53
* [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
* CoreDNS-1.6.7
* linux/amd64, go1.13.6, da7f65b
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* Trace[2019727887]: [21.00538538s] [21.00538538s] END
* E0513 17:20:45.162882       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0513 17:20:45.162669       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-13 17:20:24.157170034 +0000 UTC m=+0.074613997) (total time: 21.00548398s):
* Trace[939984059]: [21.00548398s] [21.00548398s] END
* E0513 17:20:45.162921       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* I0513 17:20:45.162705       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105 (started: 2020-05-13 17:20:24.158033834 +0000 UTC m=+0.075477697) (total time: 21.00454048s):
* Trace[1427131847]: [21.00454048s] [21.00454048s] END
* E0513 17:20:45.163026       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
* 
* ==> describe nodes <==
* Name:               minikube
* Roles:              master
* Labels:             beta.kubernetes.io/arch=amd64
*                     beta.kubernetes.io/os=linux
*                     kubernetes.io/arch=amd64
*                     kubernetes.io/hostname=minikube
*                     kubernetes.io/os=linux
*                     minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278
*                     minikube.k8s.io/name=minikube
*                     minikube.k8s.io/updated_at=2020_05_13T10_20_06_0700
*                     minikube.k8s.io/version=v1.10.1
*                     node-role.kubernetes.io/master=
* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
*                     node.alpha.kubernetes.io/ttl: 0
*                     volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp:  Wed, 13 May 2020 17:20:03 +0000
* Taints:             <none>
* Unschedulable:      false
* Lease:
*   HolderIdentity:  minikube
*   AcquireTime:     <unset>
*   RenewTime:       Wed, 13 May 2020 17:24:16 +0000
* Conditions:
*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
*   ----             ------  -----------------                 ------------------                ------                       -------
*   MemoryPressure   False   Wed, 13 May 2020 17:20:16 +0000   Wed, 13 May 2020 17:20:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
*   DiskPressure     False   Wed, 13 May 2020 17:20:16 +0000   Wed, 13 May 2020 17:20:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
*   PIDPressure      False   Wed, 13 May 2020 17:20:16 +0000   Wed, 13 May 2020 17:20:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
*   Ready            True    Wed, 13 May 2020 17:20:16 +0000   Wed, 13 May 2020 17:20:16 +0000   KubeletReady                 kubelet is posting ready status
* Addresses:
*   InternalIP:  172.17.0.2
*   Hostname:    minikube
* Capacity:
*   cpu:                2
*   ephemeral-storage:  65792556Ki
*   hugepages-1Gi:      0
*   hugepages-2Mi:      0
*   memory:             2039192Ki
*   pods:               110
* Allocatable:
*   cpu:                2
*   ephemeral-storage:  65792556Ki
*   hugepages-1Gi:      0
*   hugepages-2Mi:      0
*   memory:             2039192Ki
*   pods:               110
* System Info:
*   Machine ID:                 d27930bd36034d0186a3f4db6e1f5c0d
*   System UUID:                4584dc1c-a2ba-43fc-b95f-decdf43dd89b
*   Boot ID:                    24918bf2-fbeb-4091-918d-7e1803ae7886
*   Kernel Version:             4.19.76-linuxkit
*   OS Image:                   Ubuntu 19.10
*   Operating System:           linux
*   Architecture:               amd64
*   Container Runtime Version:  docker://19.3.2
*   Kubelet Version:            v1.18.2
*   Kube-Proxy Version:         v1.18.2
* PodCIDR:                      10.244.0.0/24
* PodCIDRs:                     10.244.0.0/24
* Non-terminated Pods:          (8 in total)
*   Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
*   ---------                   ----                                ------------  ----------  ---------------  -------------  ---
*   kube-system                 coredns-66bff467f8-bs2xh            100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m59s
*   kube-system                 coredns-66bff467f8-qcfpx            100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m59s
*   kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
*   kube-system                 kube-apiserver-minikube             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
*   kube-system                 kube-controller-manager-minikube    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
*   kube-system                 kube-proxy-f7gpt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
*   kube-system                 kube-scheduler-minikube             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
*   kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
* Allocated resources:
*   (Total limits may be over 100 percent, i.e., overcommitted.)
*   Resource           Requests    Limits
*   --------           --------    ------
*   cpu                750m (37%)  0 (0%)
*   memory             140Mi (7%)  340Mi (17%)
*   ephemeral-storage  0 (0%)      0 (0%)
*   hugepages-1Gi      0 (0%)      0 (0%)
*   hugepages-2Mi      0 (0%)      0 (0%)
* Events:
*   Type     Reason                   Age    From                  Message
*   ----     ------                   ----   ----                  -------
*   Normal   Starting                 4m15s  kubelet, minikube     Starting kubelet.
*   Normal   NodeHasSufficientMemory  4m15s  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
*   Normal   NodeHasNoDiskPressure    4m15s  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
*   Normal   NodeHasSufficientPID     4m15s  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
*   Normal   NodeAllocatableEnforced  4m15s  kubelet, minikube     Updated Node Allocatable limit across pods
*   Normal   NodeReady                4m5s   kubelet, minikube     Node minikube status is now: NodeReady
*   Warning  readOnlySysFS            3m57s  kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
*   Normal   Starting                 3m57s  kube-proxy, minikube  Starting kube-proxy.
* 
* ==> dmesg <==
* [May13 17:15] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
* [  +0.002957] PCI: Fatal: No config space access function found
* [  +0.017837] PCI: System does not support PCI
* [  +0.049193] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
* [  +0.102112] Unstable clock detected, switching default tracing clock to "global"
*               If you want to keep using the local clock, then add:
*                 "trace_clock=local"
*               on the kernel command line
* [  +0.018127] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* [  +0.001056] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* [May13 17:16] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* [  +0.001283] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
* 
* ==> etcd [76663f88b436] <==
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-05-13 17:20:00.359024 I | etcdmain: etcd Version: 3.4.3
* 2020-05-13 17:20:00.359228 I | etcdmain: Git SHA: 3cf2f69b5
* 2020-05-13 17:20:00.359231 I | etcdmain: Go Version: go1.12.12
* 2020-05-13 17:20:00.359234 I | etcdmain: Go OS/Arch: linux/amd64
* 2020-05-13 17:20:00.359236 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
* [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
* 2020-05-13 17:20:00.361694 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-05-13 17:20:00.365542 I | embed: name = minikube
* 2020-05-13 17:20:00.365556 I | embed: data dir = /var/lib/minikube/etcd
* 2020-05-13 17:20:00.365559 I | embed: member dir = /var/lib/minikube/etcd/member
* 2020-05-13 17:20:00.365561 I | embed: heartbeat = 100ms
* 2020-05-13 17:20:00.365563 I | embed: election = 1000ms
* 2020-05-13 17:20:00.365565 I | embed: snapshot count = 10000
* 2020-05-13 17:20:00.365571 I | embed: advertise client URLs = https://172.17.0.2:2379
* 2020-05-13 17:20:00.424390 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
* raft2020/05/13 17:20:00 INFO: b8e14bda2255bc24 switched to configuration voters=()
* raft2020/05/13 17:20:00 INFO: b8e14bda2255bc24 became follower at term 0
* raft2020/05/13 17:20:00 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
* raft2020/05/13 17:20:00 INFO: b8e14bda2255bc24 became follower at term 1
* raft2020/05/13 17:20:00 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
* 2020-05-13 17:20:00.471737 W | auth: simple token is not cryptographically signed
* 2020-05-13 17:20:00.484157 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
* raft2020/05/13 17:20:00 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
* 2020-05-13 17:20:00.487525 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
* 2020-05-13 17:20:00.487607 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
* 2020-05-13 17:20:00.488235 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
* 2020-05-13 17:20:00.488431 I | embed: listening for metrics on http://127.0.0.1:2381
* 2020-05-13 17:20:00.488576 I | embed: listening for peers on 172.17.0.2:2380
* raft2020/05/13 17:20:01 INFO: b8e14bda2255bc24 is starting a new election at term 1
* raft2020/05/13 17:20:01 INFO: b8e14bda2255bc24 became candidate at term 2
* raft2020/05/13 17:20:01 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
* raft2020/05/13 17:20:01 INFO: b8e14bda2255bc24 became leader at term 2
* raft2020/05/13 17:20:01 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
* 2020-05-13 17:20:01.425745 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
* 2020-05-13 17:20:01.425961 I | embed: ready to serve client requests
* 2020-05-13 17:20:01.426908 I | embed: serving client requests on 172.17.0.2:2379
* 2020-05-13 17:20:01.427401 I | etcdserver: setting up the initial cluster version to 3.4
* 2020-05-13 17:20:01.427504 I | embed: ready to serve client requests
* 2020-05-13 17:20:01.428345 I | embed: serving client requests on 127.0.0.1:2379
* 2020-05-13 17:20:01.433337 N | etcdserver/membership: set the initial cluster version to 3.4
* 2020-05-13 17:20:01.436999 I | etcdserver/api: enabled capabilities for version 3.4
* 
* ==> kernel <==
*  17:24:21 up 8 min,  0 users,  load average: 0.14, 0.20, 0.10
* Linux minikube 4.19.76-linuxkit #1 SMP Fri Apr 3 15:53:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
* PRETTY_NAME="Ubuntu 19.10"
* 
* ==> kube-apiserver [586e0d3bc3b1] <==
* W0513 17:20:02.111847       1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
* W0513 17:20:02.118793       1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
* W0513 17:20:02.129598       1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
* W0513 17:20:02.131889       1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
* W0513 17:20:02.141335       1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
* W0513 17:20:02.154925       1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
* W0513 17:20:02.154945       1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
* I0513 17:20:02.162809       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
* I0513 17:20:02.162827       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
* I0513 17:20:02.164103       1 client.go:361] parsed scheme: "endpoint"
* I0513 17:20:02.164146       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
* I0513 17:20:02.169884       1 client.go:361] parsed scheme: "endpoint"
* I0513 17:20:02.169926       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
* I0513 17:20:03.505722       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0513 17:20:03.505898       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0513 17:20:03.506084       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
* I0513 17:20:03.506135       1 secure_serving.go:178] Serving securely on [::]:8443
* I0513 17:20:03.506286       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* I0513 17:20:03.507425       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
* I0513 17:20:03.507536       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
* I0513 17:20:03.507833       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
* I0513 17:20:03.507893       1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
* I0513 17:20:03.507949       1 available_controller.go:387] Starting AvailableConditionController
* I0513 17:20:03.507988       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
* I0513 17:20:03.508040       1 controller.go:81] Starting OpenAPI AggregationController
* I0513 17:20:03.508394       1 crd_finalizer.go:266] Starting CRDFinalizer
* I0513 17:20:03.508479       1 autoregister_controller.go:141] Starting autoregister controller
* I0513 17:20:03.508535       1 cache.go:32] Waiting for caches to sync for autoregister controller
* I0513 17:20:03.523444       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
* I0513 17:20:03.523469       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
* I0513 17:20:03.523660       1 controller.go:86] Starting OpenAPI controller
* I0513 17:20:03.523671       1 customresource_discovery_controller.go:209] Starting DiscoveryController
* I0513 17:20:03.523678       1 naming_controller.go:291] Starting NamingConditionController
* I0513 17:20:03.523686       1 establishing_controller.go:76] Starting EstablishingController
* I0513 17:20:03.523693       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
* I0513 17:20:03.523700       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
* I0513 17:20:03.523719       1 crdregistration_controller.go:111] Starting crd-autoregister controller
* I0513 17:20:03.523724       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
* E0513 17:20:03.538853       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
* I0513 17:20:03.607725       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
* I0513 17:20:03.608724       1 cache.go:39] Caches are synced for AvailableConditionController controller
* I0513 17:20:03.608744       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
* I0513 17:20:03.608837       1 cache.go:39] Caches are synced for autoregister controller
* I0513 17:20:03.623923       1 shared_informer.go:230] Caches are synced for crd-autoregister 
* I0513 17:20:04.506242       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
* I0513 17:20:04.506359       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
* I0513 17:20:04.512818       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
* I0513 17:20:04.518341       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
* I0513 17:20:04.518384       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
* I0513 17:20:04.861493       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
* I0513 17:20:04.893646       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
* W0513 17:20:05.019881       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
* I0513 17:20:05.020555       1 controller.go:606] quota admission added evaluator for: endpoints
* I0513 17:20:05.028017       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
* I0513 17:20:06.309149       1 controller.go:606] quota admission added evaluator for: serviceaccounts
* I0513 17:20:06.324804       1 controller.go:606] quota admission added evaluator for: deployments.apps
* I0513 17:20:06.540600       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
* I0513 17:20:06.702349       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
* I0513 17:20:22.970383       1 controller.go:606] quota admission added evaluator for: replicasets.apps
* I0513 17:20:23.089579       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
* 
* ==> kube-controller-manager [09bb735530ce] <==
* I0513 17:20:22.669087       1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
* I0513 17:20:22.686387       1 controllermanager.go:533] Started "cronjob"
* I0513 17:20:22.686478       1 cronjob_controller.go:97] Starting CronJob Manager
* I0513 17:20:22.693229       1 controllermanager.go:533] Started "csrsigning"
* I0513 17:20:22.693308       1 certificate_controller.go:119] Starting certificate controller "csrsigning"
* I0513 17:20:22.693317       1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
* I0513 17:20:22.693333       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
* I0513 17:20:22.771218       1 node_lifecycle_controller.go:78] Sending events to api server
* E0513 17:20:22.771272       1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
* W0513 17:20:22.771285       1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
* I0513 17:20:22.772585       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
* I0513 17:20:22.782700       1 shared_informer.go:223] Waiting for caches to sync for resource quota
* W0513 17:20:22.816818       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
* I0513 17:20:22.817685       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
* I0513 17:20:22.819868       1 shared_informer.go:230] Caches are synced for HPA 
* I0513 17:20:22.820362       1 shared_informer.go:230] Caches are synced for TTL 
* I0513 17:20:22.825850       1 shared_informer.go:230] Caches are synced for PVC protection 
* I0513 17:20:22.826186       1 shared_informer.go:230] Caches are synced for namespace 
* I0513 17:20:22.828836       1 shared_informer.go:230] Caches are synced for service account 
* I0513 17:20:22.837473       1 shared_informer.go:230] Caches are synced for endpoint_slice 
* I0513 17:20:22.856801       1 shared_informer.go:230] Caches are synced for endpoint 
* I0513 17:20:22.865835       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
* I0513 17:20:22.868587       1 shared_informer.go:230] Caches are synced for node 
* I0513 17:20:22.868637       1 range_allocator.go:172] Starting range CIDR allocator
* I0513 17:20:22.868845       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
* I0513 17:20:22.869077       1 shared_informer.go:230] Caches are synced for cidrallocator 
* I0513 17:20:22.869536       1 shared_informer.go:230] Caches are synced for ReplicationController 
* I0513 17:20:22.873846       1 shared_informer.go:230] Caches are synced for taint 
* I0513 17:20:22.874340       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
* W0513 17:20:22.874598       1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
* I0513 17:20:22.874711       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
* I0513 17:20:22.875078       1 taint_manager.go:187] Starting NoExecuteTaintManager
* I0513 17:20:22.875596       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"43ab6bb6-350b-4faa-ad5a-dbc31f8ff39f", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
* I0513 17:20:22.885014       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
* I0513 17:20:22.888476       1 shared_informer.go:230] Caches are synced for ReplicaSet 
* E0513 17:20:22.889492       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
* I0513 17:20:22.915799       1 shared_informer.go:230] Caches are synced for GC 
* I0513 17:20:22.965199       1 shared_informer.go:230] Caches are synced for deployment 
* I0513 17:20:22.975366       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"fc191eb6-f583-4615-a760-cb36efcb1dcc", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
* I0513 17:20:22.988878       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"badd93de-7982-430b-a4d1-4121eb2275cd", APIVersion:"apps/v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-qcfpx
* I0513 17:20:22.993409       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
* I0513 17:20:22.995128       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"badd93de-7982-430b-a4d1-4121eb2275cd", APIVersion:"apps/v1", ResourceVersion:"378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-bs2xh
* I0513 17:20:23.014907       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
* I0513 17:20:23.083125       1 shared_informer.go:230] Caches are synced for daemon sets 
* I0513 17:20:23.096890       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d3843de2-d525-4029-ba8a-aa8fb26a6579", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-f7gpt
* E0513 17:20:23.111209       1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d3843de2-d525-4029-ba8a-aa8fb26a6579", ResourceVersion:"184", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724987206, loc:(*time.Location)(0x6d07200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001a79540), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001a79560)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001a79580), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0000a9340), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a795a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001a795c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001a79600)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00157cc80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a94298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000399110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0014972c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001a942e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* E0513 17:20:23.125353       1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d3843de2-d525-4029-ba8a-aa8fb26a6579", ResourceVersion:"407", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724987206, loc:(*time.Location)(0x6d07200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001bd35a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bd35c0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001bd35e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001bd3600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bd3620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001c06700), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bd3640), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), 
NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bd3660), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001bd36a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001c12820), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016c7e08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00026d0a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001497bc0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016c7e58)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
* I0513 17:20:23.214597       1 shared_informer.go:230] Caches are synced for PV protection 
* I0513 17:20:23.220908       1 shared_informer.go:230] Caches are synced for expand 
* I0513 17:20:23.265460       1 shared_informer.go:230] Caches are synced for persistent volume 
* I0513 17:20:23.267096       1 shared_informer.go:230] Caches are synced for attach detach 
* I0513 17:20:23.316821       1 shared_informer.go:230] Caches are synced for disruption 
* I0513 17:20:23.316835       1 disruption.go:339] Sending events to api server.
* I0513 17:20:23.365524       1 shared_informer.go:230] Caches are synced for stateful set 
* I0513 17:20:23.382904       1 shared_informer.go:230] Caches are synced for resource quota 
* I0513 17:20:23.427144       1 shared_informer.go:230] Caches are synced for resource quota 
* I0513 17:20:23.446042       1 shared_informer.go:230] Caches are synced for job 
* I0513 17:20:23.462651       1 shared_informer.go:230] Caches are synced for garbage collector 
* I0513 17:20:23.462677       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I0513 17:20:23.472755       1 shared_informer.go:230] Caches are synced for garbage collector 
* 
* ==> kube-proxy [9cede74769d5] <==
* W0513 17:20:24.056269       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
* I0513 17:20:24.062158       1 node.go:136] Successfully retrieved node IP: 172.17.0.2
* I0513 17:20:24.062185       1 server_others.go:186] Using iptables Proxier.
* I0513 17:20:24.062508       1 server.go:583] Version: v1.18.2
* I0513 17:20:24.062737       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
* I0513 17:20:24.062755       1 conntrack.go:52] Setting nf_conntrack_max to 131072
* E0513 17:20:24.062965       1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
* I0513 17:20:24.063012       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
* I0513 17:20:24.063035       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
* I0513 17:20:24.065022       1 config.go:315] Starting service config controller
* I0513 17:20:24.065096       1 shared_informer.go:223] Waiting for caches to sync for service config
* I0513 17:20:24.065106       1 config.go:133] Starting endpoints config controller
* I0513 17:20:24.065139       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
* I0513 17:20:24.165231       1 shared_informer.go:230] Caches are synced for service config 
* I0513 17:20:24.165237       1 shared_informer.go:230] Caches are synced for endpoints config 
* 
* ==> kube-scheduler [f19381bc9943] <==
* I0513 17:20:00.663970       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0513 17:20:00.664147       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0513 17:20:01.077448       1 serving.go:313] Generated self-signed cert in-memory
* W0513 17:20:03.563119       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
* W0513 17:20:03.563137       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
* W0513 17:20:03.563143       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
* W0513 17:20:03.563147       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
* I0513 17:20:03.588879       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* I0513 17:20:03.589349       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
* W0513 17:20:03.592884       1 authorization.go:47] Authorization is disabled
* W0513 17:20:03.592900       1 authentication.go:40] Authentication is disabled
* I0513 17:20:03.592908       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
* I0513 17:20:03.594692       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0513 17:20:03.595008       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
* I0513 17:20:03.595139       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
* I0513 17:20:03.595240       1 tlsconfig.go:240] Starting DynamicServingCertificateController
* E0513 17:20:03.599539       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0513 17:20:03.599765       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0513 17:20:03.600744       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
* E0513 17:20:03.600825       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0513 17:20:03.600905       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0513 17:20:03.601061       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0513 17:20:03.601277       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0513 17:20:03.601345       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0513 17:20:03.601401       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0513 17:20:03.601613       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
* E0513 17:20:03.602651       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* E0513 17:20:03.603544       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
* E0513 17:20:03.605614       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
* E0513 17:20:03.605655       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
* E0513 17:20:03.606749       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
* E0513 17:20:03.607718       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
* E0513 17:20:03.609640       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
* E0513 17:20:03.611453       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
* I0513 17:20:05.996038       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
* I0513 17:20:06.695397       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
* I0513 17:20:06.703687       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
* 
* ==> kubelet <==
* -- Logs begin at Wed 2020-05-13 17:19:20 UTC, end at Wed 2020-05-13 17:24:22 UTC. --
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.781952    1857 kubelet.go:1821] Starting kubelet main sync loop.
* May 13 17:20:06 minikube kubelet[1857]: E0513 17:20:06.781978    1857 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.850263    1857 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.869521    1857 kubelet_node_status.go:70] Attempting to register node minikube
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.882362    1857 kubelet_node_status.go:112] Node minikube was previously registered
* May 13 17:20:06 minikube kubelet[1857]: E0513 17:20:06.882454    1857 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.882802    1857 kubelet_node_status.go:73] Successfully registered node minikube
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.964147    1857 cpu_manager.go:184] [cpumanager] starting with none policy
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.964171    1857 cpu_manager.go:185] [cpumanager] reconciling every 10s
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.964261    1857 state_mem.go:36] [cpumanager] initializing new in-memory state store
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.964685    1857 state_mem.go:88] [cpumanager] updated default cpuset: ""
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.964693    1857 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.964714    1857 policy_none.go:43] [cpumanager] none policy: Start
* May 13 17:20:06 minikube kubelet[1857]: I0513 17:20:06.965713    1857 plugin_manager.go:114] Starting Kubelet Plugin Manager
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.082742    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.090717    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.097172    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.102547    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159706    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-certs") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159733    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-data") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159747    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/a4e4dc2bb0e7672fde01b5c790ce190f-ca-certs") pod "kube-apiserver-minikube" (UID: "a4e4dc2bb0e7672fde01b5c790ce190f")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159819    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/a4e4dc2bb0e7672fde01b5c790ce190f-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "a4e4dc2bb0e7672fde01b5c790ce190f")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159835    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/a4e4dc2bb0e7672fde01b5c790ce190f-k8s-certs") pod "kube-apiserver-minikube" (UID: "a4e4dc2bb0e7672fde01b5c790ce190f")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159864    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-ca-certs") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159879    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159892    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159905    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/a4e4dc2bb0e7672fde01b5c790ce190f-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "a4e4dc2bb0e7672fde01b5c790ce190f")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159915    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159936    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159978    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-k8s-certs") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.159991    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/7f415a35d57cff5428871c5a51313bd5-kubeconfig") pod "kube-controller-manager-minikube" (UID: "7f415a35d57cff5428871c5a51313bd5")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.160001    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/155707e0c19147c8dc5e997f089c0ad1-kubeconfig") pod "kube-scheduler-minikube" (UID: "155707e0c19147c8dc5e997f089c0ad1")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.160013    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/a4e4dc2bb0e7672fde01b5c790ce190f-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "a4e4dc2bb0e7672fde01b5c790ce190f")
* May 13 17:20:07 minikube kubelet[1857]: I0513 17:20:07.160017    1857 reconciler.go:157] Reconciler: start to sync state
* May 13 17:20:22 minikube kubelet[1857]: I0513 17:20:22.895878    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:22 minikube kubelet[1857]: I0513 17:20:22.928394    1857 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.244.0.0/24
* May 13 17:20:22 minikube kubelet[1857]: I0513 17:20:22.928792    1857 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
* May 13 17:20:22 minikube kubelet[1857]: I0513 17:20:22.929832    1857 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
* May 13 17:20:22 minikube kubelet[1857]: I0513 17:20:22.994046    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.003283    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.036848    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/3d43a46f-2df2-4cc0-86ff-d59fb76fba7c-tmp") pod "storage-provisioner" (UID: "3d43a46f-2df2-4cc0-86ff-d59fb76fba7c")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.036875    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-vvplt" (UniqueName: "kubernetes.io/secret/3d43a46f-2df2-4cc0-86ff-d59fb76fba7c-storage-provisioner-token-vvplt") pod "storage-provisioner" (UID: "3d43a46f-2df2-4cc0-86ff-d59fb76fba7c")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.106431    1857 topology_manager.go:233] [topologymanager] Topology Admit Handler
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.137340    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4w6xn" (UniqueName: "kubernetes.io/secret/1c81bedd-170d-44c0-bf9a-dfb0c508431b-coredns-token-4w6xn") pod "coredns-66bff467f8-bs2xh" (UID: "1c81bedd-170d-44c0-bf9a-dfb0c508431b")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.137656    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1c81bedd-170d-44c0-bf9a-dfb0c508431b-config-volume") pod "coredns-66bff467f8-bs2xh" (UID: "1c81bedd-170d-44c0-bf9a-dfb0c508431b")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.137760    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8bb3b0cf-f95e-4f95-8f5e-7980e2d4e198-config-volume") pod "coredns-66bff467f8-qcfpx" (UID: "8bb3b0cf-f95e-4f95-8f5e-7980e2d4e198")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.137800    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-4w6xn" (UniqueName: "kubernetes.io/secret/8bb3b0cf-f95e-4f95-8f5e-7980e2d4e198-coredns-token-4w6xn") pod "coredns-66bff467f8-qcfpx" (UID: "8bb3b0cf-f95e-4f95-8f5e-7980e2d4e198")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.237977    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ee30d1e7-e342-4ac0-a207-9dac2ce0fb90-kube-proxy") pod "kube-proxy-f7gpt" (UID: "ee30d1e7-e342-4ac0-a207-9dac2ce0fb90")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.238046    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-p2xnq" (UniqueName: "kubernetes.io/secret/ee30d1e7-e342-4ac0-a207-9dac2ce0fb90-kube-proxy-token-p2xnq") pod "kube-proxy-f7gpt" (UID: "ee30d1e7-e342-4ac0-a207-9dac2ce0fb90")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.238079    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/ee30d1e7-e342-4ac0-a207-9dac2ce0fb90-xtables-lock") pod "kube-proxy-f7gpt" (UID: "ee30d1e7-e342-4ac0-a207-9dac2ce0fb90")
* May 13 17:20:23 minikube kubelet[1857]: I0513 17:20:23.238104    1857 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/ee30d1e7-e342-4ac0-a207-9dac2ce0fb90-lib-modules") pod "kube-proxy-f7gpt" (UID: "ee30d1e7-e342-4ac0-a207-9dac2ce0fb90")
* May 13 17:20:23 minikube kubelet[1857]: W0513 17:20:23.833916    1857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-qcfpx through plugin: invalid network status for
* May 13 17:20:23 minikube kubelet[1857]: W0513 17:20:23.893869    1857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-qcfpx through plugin: invalid network status for
* May 13 17:20:23 minikube kubelet[1857]: E0513 17:20:23.894815    1857 remote_runtime.go:295] ContainerStatus "a02b816d5e83d31175737ac8f221da853724cdd994f908a5c634aec4c230dad1" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: a02b816d5e83d31175737ac8f221da853724cdd994f908a5c634aec4c230dad1
* May 13 17:20:23 minikube kubelet[1857]: E0513 17:20:23.894923    1857 kuberuntime_manager.go:952] getPodContainerStatuses for pod "coredns-66bff467f8-qcfpx_kube-system(8bb3b0cf-f95e-4f95-8f5e-7980e2d4e198)" failed: rpc error: code = Unknown desc = Error: No such container: a02b816d5e83d31175737ac8f221da853724cdd994f908a5c634aec4c230dad1
* May 13 17:20:23 minikube kubelet[1857]: W0513 17:20:23.932653    1857 pod_container_deletor.go:77] Container "61a27e421b6807253601da55933977857d8de69ba2b1463909b966603d06056a" not found in pod's containers
* May 13 17:20:23 minikube kubelet[1857]: W0513 17:20:23.935483    1857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-bs2xh through plugin: invalid network status for
* May 13 17:20:24 minikube kubelet[1857]: W0513 17:20:24.939587    1857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-bs2xh through plugin: invalid network status for
* May 13 17:20:24 minikube kubelet[1857]: W0513 17:20:24.944864    1857 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-qcfpx through plugin: invalid network status for
* May 13 17:20:46 minikube kubelet[1857]: I0513 17:20:46.075930    1857 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3e3b13d786a751c75906058821eef71012fa5077f0dea527ca8dad0c9089857c
* 
* ==> storage-provisioner [33b4d1205ef1] <==
* 
* ==> storage-provisioner [3e3b13d786a7] <==
* F0513 17:20:45.157249       1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: getsockopt: connection refused

Good `docker logs minikube` output from the host:
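For reference, the block below is the node container's raw log, which can be reproduced on the host with the standard Docker CLI:

    docker logs minikube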

INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
Failed to find module 'autofs4'
systemd 242 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization docker.
Detected architecture x86-64.
Failed to create symlink /sys/fs/cgroup/net_prio: File exists
Failed to create symlink /sys/fs/cgroup/net_cls: File exists
Failed to create symlink /sys/fs/cgroup/cpuacct: File exists
Failed to create symlink /sys/fs/cgroup/cpu: File exists

Welcome to Ubuntu 19.10!

Set hostname to <minikube>.
Failed to bump fs.file-max, ignoring: Invalid argument
/lib/systemd/system/docker.socket:5: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
[  OK  ] Listening on Journal Audit Socket.
[UNSUPP] Starting of Arbitrary Exec…Automount Point not supported.
[  OK  ] Reached target Network is Online.
[  OK  ] Reached target Slices.
[  OK  ] Started Dispatch Password …ts to Console Directory Watch.
[  OK  ] Reached target Paths.
[  OK  ] Reached target Swap.
[  OK  ] Listening on Journal Socket.
         Mounting Kernel Debug File System...
         Starting Create list of re…odes for the current kernel...
         Starting Remount Root and Kernel File Systems...
         Mounting Huge Pages File System...
[  OK  ] Listening on Journal Socket (/dev/log).
         Starting Journal Service...
[  OK  ] Reached target Local Encrypted Volumes.
         Starting Apply Kernel Variables...
         Mounting FUSE Control File System...
[  OK  ] Started Remount Root and Kernel File Systems.
         Starting Create System Users...
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Create list of req… nodes for the current kernel.
[  OK  ] Mounted Kernel Debug File System.
[  OK  ] Mounted Huge Pages File System.
[  OK  ] Mounted FUSE Control File System.
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Started Apply Kernel Variables.
[  OK  ] Started Create System Users.
         Starting Create Static Device Nodes in /dev...
[  OK  ] Started Create Static Device Nodes in /dev.
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target Local File Systems.
[  OK  ] Started Journal Service.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Reached target System Initialization.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
         Starting Docker Socket for the API.
[  OK  ] Listening on Docker Socket for the API.
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
         Starting containerd container runtime...
         Starting OpenBSD Secure Shell server...
[  OK  ] Started Flush Journal to Persistent Storage.
[  OK  ] Started containerd container runtime.
         Starting Docker Application Container Engine...
[  OK  ] Started OpenBSD Secure Shell server.
[  OK  ] Started Docker Application Container Engine.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.

Bad `minikube start --alsologtostderr` logs:
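For reference, the block below is the verbose output from rerunning the failing start with debug logging mirrored to stderr:

    minikube start --alsologtostderr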

I0513 10:25:52.191638   22848 start.go:99] hostinfo: {"hostname":"<system-hostname>","uptime":79090,"bootTime":1589311662,"procs":277,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.18362 Build 18362","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"2ff1be69-d9b0-46b2-b9e2-f8e389f49971"}
W0513 10:25:52.191638   22848 start.go:107] gopshost.Virtualization returned error: not implemented yet
* minikube v1.10.1 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
I0513 10:25:52.197635   22848 driver.go:253] Setting default libvirt URI to qemu:///system
I0513 10:25:52.315640   22848 docker.go:95] docker version: linux-19.03.8
* Using the docker driver based on user configuration
I0513 10:25:52.317639   22848 start.go:215] selected driver: docker
I0513 10:25:52.317639   22848 start.go:594] validating driver "docker" against <nil>
I0513 10:25:52.317639   22848 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0513 10:25:52.317639   22848 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0513 10:25:52.317639   22848 start_flags.go:217] no existing cluster config was found, will generate one from the flags 
I0513 10:25:52.325635   22848 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 10:25:52.671637   22848 start_flags.go:231] Using suggested 1991MB memory alloc based on sys=16108MB, container=1991MB
I0513 10:25:52.671637   22848 start_flags.go:558] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node minikube in cluster minikube
I0513 10:25:52.674636   22848 cache.go:104] Beginning downloading kic artifacts for docker with docker
I0513 10:25:52.766635   22848 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0513 10:25:52.766635   22848 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 10:25:52.766635   22848 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 10:25:52.766635   22848 cache.go:48] Caching tarball of preloaded images
I0513 10:25:52.766635   22848 preload.go:122] Found C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0513 10:25:52.766635   22848 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0513 10:25:52.766635   22848 profile.go:156] Saving config to C:\Users\<user>\.minikube\profiles\minikube\config.json ...
I0513 10:25:52.767619   22848 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\config.json: {Name:mkefe1ed68ad1dcc9d856414ff8d3673a072cb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 10:25:52.769602   22848 cache.go:132] Successfully downloaded all kic artifacts
I0513 10:25:52.769602   22848 start.go:223] acquiring machines lock for minikube: {Name:mk71de99f9d15522919eee1cb7da11f7d05e4fb9 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0513 10:25:52.769602   22848 start.go:227] acquired machines lock for "minikube" in 0s
I0513 10:25:52.769602   22848 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}
I0513 10:25:52.769602   22848 start.go:104] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=1991MB) ...
I0513 10:25:52.772636   22848 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0513 10:25:52.772636   22848 client.go:161] LocalClient.Create starting
I0513 10:25:52.772636   22848 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\ca.pem
I0513 10:25:52.773612   22848 main.go:110] libmachine: Decoding PEM data...
I0513 10:25:52.773612   22848 main.go:110] libmachine: Parsing certificate...
I0513 10:25:52.773612   22848 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\cert.pem
I0513 10:25:52.773612   22848 main.go:110] libmachine: Decoding PEM data...
I0513 10:25:52.773612   22848 main.go:110] libmachine: Parsing certificate...
I0513 10:25:52.795599   22848 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0513 10:25:52.888638   22848 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0513 10:25:52.980757   22848 oci.go:98] Successfully created a docker volume minikube
I0513 10:25:52.980757   22848 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 10:25:52.980757   22848 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 10:25:52.980757   22848 kic.go:134] Starting extracting preloaded images to volume ...
I0513 10:25:52.989603   22848 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 10:25:52.989603   22848 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0513 10:25:53.434942   22848 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0513 10:25:53.877959   22848 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0513 10:25:54.612141   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:54.736134   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:25:55.333796   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:55.481765   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:55.627763   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:55.796765   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:55.970764   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:58.453092   22848 cli_runner.go:150] Completed: docker inspect minikube --format={{.State.Running}}: (2.4823286s)
I0513 10:25:58.584629   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:58.803618   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:59.094294   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:59.407551   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:25:59.968447   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:00.843002   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:02.442083   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:02.643039   22848 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (9.6534361s)
I0513 10:26:02.643039   22848 kic.go:139] duration metric: took 9.662282 seconds to extract preloaded images to volume
I0513 10:26:03.750383   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:06.073563   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:09.279737   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:09.453746   22848 client.go:164] LocalClient.Create took 16.6811099s
I0513 10:26:11.454148   22848 start.go:107] duration metric: createHost completed in 18.684546s
I0513 10:26:11.454148   22848 start.go:74] releasing machines lock for "minikube", held for 18.684546s
I0513 10:26:11.498173   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:26:11.581284   22848 stop.go:36] StopHost: minikube
* Stopping "minikube" in docker ...
I0513 10:26:11.600211   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:26:11.686511   22848 stop.go:76] host is in state Stopped
I0513 10:26:11.686511   22848 main.go:110] libmachine: Stopping "minikube"...
I0513 10:26:11.704209   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:26:11.784545   22848 stop.go:56] stop err: Machine "minikube" is already stopped.
I0513 10:26:11.784545   22848 stop.go:59] host is already stopped
* Deleting "minikube" in docker ...
I0513 10:26:12.803752   22848 cli_runner.go:108] Run: docker inspect -f {{.Id}} minikube
I0513 10:26:12.897225   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:26:12.988224   22848 cli_runner.go:108] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0513 10:26:13.075276   22848 oci.go:544] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error response from daemon: Container 95bd470227ed54b28d8f1b8795b2f73e056a2627b0a53c18e80575e5db185ded is not running
I0513 10:26:14.089138   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:26:14.173362   22848 oci.go:552] container minikube status is Stopped
I0513 10:26:14.173362   22848 oci.go:564] Successfully shutdown container minikube
I0513 10:26:14.181281   22848 cli_runner.go:108] Run: docker rm -f -v minikube
I0513 10:26:14.284288   22848 cli_runner.go:108] Run: docker inspect -f {{.Id}} minikube
! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
I0513 10:26:19.374055   22848 start.go:223] acquiring machines lock for minikube: {Name:mk71de99f9d15522919eee1cb7da11f7d05e4fb9 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0513 10:26:19.374055   22848 start.go:227] acquired machines lock for "minikube" in 0s
* Creating docker container (CPUs=2, Memory=1991MB) ...
I0513 10:26:19.374055   22848 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:1991 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}
I0513 10:26:19.374055   22848 start.go:104] createHost starting for "" (driver="docker")
I0513 10:26:19.379129   22848 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0513 10:26:19.379129   22848 client.go:161] LocalClient.Create starting
I0513 10:26:19.379129   22848 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\ca.pem
I0513 10:26:19.380076   22848 main.go:110] libmachine: Decoding PEM data...
I0513 10:26:19.380076   22848 main.go:110] libmachine: Parsing certificate...
I0513 10:26:19.380076   22848 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\cert.pem
I0513 10:26:19.381072   22848 main.go:110] libmachine: Decoding PEM data...
I0513 10:26:19.381072   22848 main.go:110] libmachine: Parsing certificate...
I0513 10:26:19.403067   22848 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0513 10:26:19.495104   22848 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0513 10:26:19.579263   22848 oci.go:98] Successfully created a docker volume minikube
I0513 10:26:19.579263   22848 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 10:26:19.579263   22848 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 10:26:19.579263   22848 kic.go:134] Starting extracting preloaded images to volume ...
I0513 10:26:19.588106   22848 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 10:26:19.588106   22848 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0513 10:26:19.968084   22848 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0513 10:26:20.383643   22848 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=1991mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0513 10:26:21.075679   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:21.197664   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 10:26:21.759451   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:21.897477   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:22.037481   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:22.212999   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:22.373994   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:22.557745   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:22.801267   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:23.157476   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:23.739195   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:24.254850   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:24.972877   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:26.506551   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:27.825801   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:29.602336   22848 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (10.0142297s)
I0513 10:26:29.602336   22848 kic.go:139] duration metric: took 10.023073 seconds to extract preloaded images to volume
I0513 10:26:31.431212   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:33.926288   22848 cli_runner.go:150] Completed: docker inspect minikube --format={{.State.Running}}: (2.4950761s)
I0513 10:26:38.481272   22848 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 10:26:38.567293   22848 client.go:164] LocalClient.Create took 19.1881639s
I0513 10:26:40.568586   22848 start.go:107] duration metric: createHost completed in 21.194531s
I0513 10:26:40.568586   22848 start.go:74] releasing machines lock for "minikube", held for 21.194531s
* Failed to start docker container. "minikube start" may fix it: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
I0513 10:26:40.568586   22848 exit.go:58] WithError(error provisioning host)=Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet called from:
goroutine 1 [running]:
runtime/debug.Stack(0x40acf1, 0x18d3660, 0x18b8300)
	/usr/local/go/src/runtime/debug/stack.go:24 +0xa4
k8s.io/minikube/pkg/minikube/exit.WithError(0x1b3f8de, 0x17, 0x1dfc340, 0xc000102480)
	/app/pkg/minikube/exit/exit.go:58 +0x3b
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b53760, 0xc00000b570, 0x0, 0x1)
	/app/cmd/minikube/cmd/start.go:170 +0xac9
github.com/spf13/cobra.(*Command).execute(0x2b53760, 0xc00000b560, 0x1, 0x1, 0x2b53760, 0xc00000b560)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2b1
github.com/spf13/cobra.(*Command).ExecuteC(0x2b527a0, 0x0, 0x0, 0xc000403001)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x350
github.com/spf13/cobra.(*Command).Execute(...)
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
	/app/cmd/minikube/cmd/root.go:112 +0x6f5
main.main()
	/app/cmd/minikube/main.go:66 +0xf1
W0513 10:26:40.569301   22848 out.go:201] error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
* 
X error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
* 
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

Bad docker logs minikube output from host:

INFO: ensuring we can execute /bin/mount even with userns-remap
INFO: remounting /sys read-only
INFO: making mounts shared
INFO: fix cgroup mounts for all subsystems
INFO: clearing and regenerating /etc/machine-id
Initializing machine ID from random generator.
INFO: faking /sys/class/dmi/id/product_name to be "kind"
INFO: faking /sys/class/dmi/id/product_uuid to be random
INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
INFO: setting iptables to detected mode: legacy
update-alternatives: error: no alternatives for iptables

medyagh commented May 13, 2020

@plnordquist minikube needs a minimum of 2GB of RAM, and your Docker Desktop has only 2GB of RAM shared across all Docker containers. I suggest resizing Docker Desktop to have at least 4 or 6GB of RAM and trying again.
Do you mind checking whether that fixes the problem?
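
For reference, after raising the Docker Desktop memory limit (Settings > Resources), a clean retry would look something like this; the --memory value here is illustrative, not a required setting:

minikube delete
minikube start --driver=docker --memory=4096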

Also, the logs from your second start show that the start was actually healthy:

* Done! kubectl is now configured to use "minikube"

plnordquist commented May 14, 2020

OK, I updated the RAM for Docker Desktop to 6GB and that didn't work. I found another issue like this one at #7885 and then went down a rabbit hole trying to figure this out, since even factory resetting Docker Desktop didn't work 100% of the time.

The behavior that works: if I clear the preloaded tarball from Docker Desktop's file sharing configuration and then cancel the request to share the tarball with the container, minikube can start successfully, because it falls back to scp later in the process to copy the tarball into the running container.
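
That fallback is visible in the good logs below; it is roughly equivalent to copying the tarball over ssh and extracting it inside the already-running container (ssh port abbreviated to <port>, host path omitted):

scp -P <port> preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 docker@127.0.0.1:/preloaded.tar.lz4
sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4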

I think this might have started in commit b509d691, since it changed the order of operations. Prior to that commit, the main minikube container would be started first, and the minikube volume would be initialized from the /var directory of the kicbase image. After that commit, there is an asynchronous race to initialize the minikube volume from either /var or /extractDir. That is why the process succeeds when I don't immediately allow Docker to share the preloaded tarball, or cancel the sharing request, and fails when the tarball has already been shared.
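
A minimal sketch of the race, built from the two commands in the logs above (image digest and host paths abbreviated to <kicbase> and <preload>):

# create the empty named volume that both commands will populate
docker volume create minikube
# background: extract the preload tarball into the volume, mounted at /extractDir
docker run --rm --entrypoint /usr/bin/tar -v <preload>.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir <kicbase> -I lz4 -xvf /preloaded.tar -C /extractDir &
# concurrently: start the node container with the same volume mounted at /var
docker run -d -t --privileged --hostname minikube --name minikube --volume minikube:/var <kicbase>

Docker only seeds an empty named volume from the image's content at the mount point, so whichever command claims the volume first decides whether /var starts from the kicbase image or from the tarball.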

In the current version of the code, beyond the commit referenced above, the Podman driver doesn't even attempt to cache the preloaded images, so it avoids this issue entirely, and thus #7885 no longer applies.

I've attached a set of logs that show the error that appears when I cancel the file sharing request; it is not fatal to the minikube start command, and the cluster starts successfully.

Good minikube start --alsologtostderr logs where sharing is cancelled:

I0513 18:44:11.004430   12624 start.go:99] hostinfo: {"hostname":"<system-hostname>","uptime":108989,"bootTime":1589311661,"procs":300,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.18362 Build 18362","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"2ff1be69-d9b0-46b2-b9e2-f8e389f49971"}
W0513 18:44:11.004430   12624 start.go:107] gopshost.Virtualization returned error: not implemented yet
* minikube v1.10.1 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
I0513 18:44:11.012461   12624 driver.go:253] Setting default libvirt URI to qemu:///system
I0513 18:44:11.430462   12624 docker.go:95] docker version: linux-19.03.8
* Using the docker driver based on user configuration
I0513 18:44:11.433461   12624 start.go:215] selected driver: docker
I0513 18:44:11.433461   12624 start.go:594] validating driver "docker" against <nil>
I0513 18:44:11.433461   12624 start.go:600] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0513 18:44:11.433461   12624 start.go:917] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0513 18:44:11.434426   12624 start_flags.go:217] no existing cluster config was found, will generate one from the flags 
I0513 18:44:11.442464   12624 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 18:44:13.078276   12624 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (1.6358105s)
I0513 18:44:13.078276   12624 start_flags.go:231] Using suggested 3892MB memory alloc based on sys=16108MB, container=3940MB
I0513 18:44:13.078276   12624 start_flags.go:558] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node minikube in cluster minikube
I0513 18:44:13.081243   12624 cache.go:104] Beginning downloading kic artifacts for docker with docker
I0513 18:44:13.475278   12624 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0513 18:44:13.475278   12624 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 18:44:13.475278   12624 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 18:44:13.475278   12624 cache.go:48] Caching tarball of preloaded images
I0513 18:44:13.475278   12624 preload.go:122] Found C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0513 18:44:13.475278   12624 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0513 18:44:13.475278   12624 profile.go:156] Saving config to C:\Users\<user>\.minikube\profiles\minikube\config.json ...
I0513 18:44:13.476231   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\config.json: {Name:mkefe1ed68ad1dcc9d856414ff8d3673a072cb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:13.478229   12624 cache.go:132] Successfully downloaded all kic artifacts
I0513 18:44:13.478229   12624 start.go:223] acquiring machines lock for minikube: {Name:mk71de99f9d15522919eee1cb7da11f7d05e4fb9 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0513 18:44:13.479231   12624 start.go:227] acquired machines lock for "minikube" in 0s
I0513 18:44:13.479231   12624 start.go:83] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3892 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}
I0513 18:44:13.479231   12624 start.go:104] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=3892MB) ...
I0513 18:44:13.484233   12624 start.go:140] libmachine.API.Create for "minikube" (driver="docker")
I0513 18:44:13.484233   12624 client.go:161] LocalClient.Create starting
I0513 18:44:13.484233   12624 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\ca.pem
I0513 18:44:13.484233   12624 main.go:110] libmachine: Decoding PEM data...
I0513 18:44:13.484233   12624 main.go:110] libmachine: Parsing certificate...
I0513 18:44:13.484233   12624 main.go:110] libmachine: Reading certificate data from C:\Users\<user>\.minikube\certs\cert.pem
I0513 18:44:13.484233   12624 main.go:110] libmachine: Decoding PEM data...
I0513 18:44:13.484233   12624 main.go:110] libmachine: Parsing certificate...
I0513 18:44:13.506264   12624 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0513 18:44:13.908265   12624 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0513 18:44:14.309230   12624 oci.go:98] Successfully created a docker volume minikube
I0513 18:44:14.309230   12624 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 18:44:14.309230   12624 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 18:44:14.309230   12624 kic.go:134] Starting extracting preloaded images to volume ...
I0513 18:44:14.318229   12624 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0513 18:44:14.319231   12624 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0513 18:44:15.983168   12624 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (1.6649373s)
I0513 18:44:15.991180   12624 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0513 18:44:17.600346   12624 cli_runner.go:150] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.6091646s)
I0513 18:44:17.609192   12624 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=3892mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0513 18:44:18.387010   12624 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Running}}
I0513 18:44:18.808566   12624 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 18:44:19.198598   12624 oci.go:212] the created container "minikube" has a running status.
I0513 18:44:19.198598   12624 kic.go:162] Creating ssh key for kic: C:\Users\<user>\.minikube\machines\minikube\id_rsa...
I0513 18:44:19.302598   12624 kic_runner.go:179] docker (temp): C:\Users\<user>\.minikube\machines\minikube\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0513 18:44:19.843200   12624 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0513 18:44:19.843200   12624 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0513 18:44:20.438871   12624 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (6.1196354s)
I0513 18:44:20.438871   12624 kic.go:137] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
docker: Error response from daemon: status code not OK but 500: {"Message":"Unhandled exception: Filesharing has been cancelled","StackTrace":"   at Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() in C:\\workspaces\\edge-2.2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 0\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() in C:\\workspaces\\edge-2.2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:line 47\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() in C:\\workspaces\\edge-2.2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:line 21\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n   at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}.
See 'docker run --help'.
I0513 18:44:20.455871   12624 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 18:44:20.841407   12624 machine.go:86] provisioning docker machine ...
I0513 18:44:20.841407   12624 ubuntu.go:166] provisioning hostname "minikube"
I0513 18:44:20.850357   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:21.289782   12624 main.go:110] libmachine: Using SSH client type: native
I0513 18:44:21.293786   12624 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0513 18:44:21.293786   12624 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0513 18:44:21.431792   12624 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0513 18:44:21.440819   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:21.845808   12624 main.go:110] libmachine: Using SSH client type: native
I0513 18:44:21.845808   12624 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0513 18:44:21.845808   12624 main.go:110] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0513 18:44:21.972809   12624 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0513 18:44:21.972809   12624 ubuntu.go:172] set auth options {CertDir:C:\Users\<user>\.minikube CaCertPath:C:\Users\<user>\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\<user>\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\<user>\.minikube\machines\server.pem ServerKeyPath:C:\Users\<user>\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\<user>\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\<user>\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\<user>\.minikube}
I0513 18:44:21.972809   12624 ubuntu.go:174] setting up certificates
I0513 18:44:21.972809   12624 provision.go:82] configureAuth start
I0513 18:44:21.982832   12624 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0513 18:44:22.383007   12624 provision.go:131] copyHostCerts
I0513 18:44:22.383007   12624 exec_runner.go:91] found C:\Users\<user>\.minikube/ca.pem, removing ...
I0513 18:44:22.383931   12624 exec_runner.go:98] cp: C:\Users\<user>\.minikube\certs\ca.pem --> C:\Users\<user>\.minikube/ca.pem (1038 bytes)
I0513 18:44:22.385930   12624 exec_runner.go:91] found C:\Users\<user>\.minikube/cert.pem, removing ...
I0513 18:44:22.386935   12624 exec_runner.go:98] cp: C:\Users\<user>\.minikube\certs\cert.pem --> C:\Users\<user>\.minikube/cert.pem (1078 bytes)
I0513 18:44:22.387961   12624 exec_runner.go:91] found C:\Users\<user>\.minikube/key.pem, removing ...
I0513 18:44:22.388929   12624 exec_runner.go:98] cp: C:\Users\<user>\.minikube\certs\key.pem --> C:\Users\<user>\.minikube/key.pem (1675 bytes)
I0513 18:44:22.389930   12624 provision.go:105] generating server cert: C:\Users\<user>\.minikube\machines\server.pem ca-key=C:\Users\<user>\.minikube\certs\ca.pem private-key=C:\Users\<user>\.minikube\certs\ca-key.pem org=<user>.minikube san=[172.17.0.2 localhost 127.0.0.1]
I0513 18:44:22.528930   12624 provision.go:159] copyRemoteCerts
I0513 18:44:22.540962   12624 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0513 18:44:22.548962   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:22.959748   12624 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 18:44:23.048966   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1038 bytes)
I0513 18:44:23.069017   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\machines\server.pem --> /etc/docker/server.pem (1123 bytes)
I0513 18:44:23.088017   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0513 18:44:23.106017   12624 provision.go:85] duration metric: configureAuth took 1.1332067s
I0513 18:44:23.106017   12624 ubuntu.go:190] setting minikube options for container-runtime
I0513 18:44:23.114052   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:23.522994   12624 main.go:110] libmachine: Using SSH client type: native
I0513 18:44:23.522994   12624 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0513 18:44:23.522994   12624 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0513 18:44:23.660063   12624 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0513 18:44:23.660063   12624 ubuntu.go:71] root file system type: overlay
I0513 18:44:23.660063   12624 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0513 18:44:23.673089   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:24.085273   12624 main.go:110] libmachine: Using SSH client type: native
I0513 18:44:24.086240   12624 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0513 18:44:24.086240   12624 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0513 18:44:24.225792   12624 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0513 18:44:24.235825   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:24.649618   12624 main.go:110] libmachine: Using SSH client type: native
I0513 18:44:24.649618   12624 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
I0513 18:44:24.649618   12624 main.go:110] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0513 18:44:25.218271   12624 main.go:110] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2020-05-14 01:44:24.222535169 +0000
@@ -8,24 +8,22 @@
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I0513 18:44:25.218271   12624 machine.go:89] provisioned docker machine in 4.3768616s
I0513 18:44:25.218271   12624 client.go:164] LocalClient.Create took 11.7340298s
I0513 18:44:25.218271   12624 start.go:145] duration metric: libmachine.API.Create for "minikube" took 11.7340298s
I0513 18:44:25.218271   12624 start.go:186] post-start starting for "minikube" (driver="docker")
I0513 18:44:25.218271   12624 start.go:196] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0513 18:44:25.232235   12624 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0513 18:44:25.240235   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:25.637229   12624 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 18:44:25.743639   12624 ssh_runner.go:148] Run: cat /etc/os-release
I0513 18:44:25.748604   12624 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0513 18:44:25.748604   12624 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0513 18:44:25.748604   12624 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0513 18:44:25.748604   12624 info.go:96] Remote host: Ubuntu 19.10
I0513 18:44:25.748604   12624 filesync.go:118] Scanning C:\Users\<user>\.minikube\addons for local assets ...
I0513 18:44:25.748604   12624 filesync.go:118] Scanning C:\Users\<user>\.minikube\files for local assets ...
I0513 18:44:25.749603   12624 start.go:189] post-start completed in 531.3317ms
I0513 18:44:25.752602   12624 start.go:107] duration metric: createHost completed in 12.2733623s
I0513 18:44:25.752602   12624 start.go:74] releasing machines lock for "minikube", held for 12.2733623s
I0513 18:44:25.763637   12624 cli_runner.go:108] Run: docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0513 18:44:26.179442   12624 profile.go:156] Saving config to C:\Users\<user>\.minikube\profiles\minikube\config.json ...
I0513 18:44:26.183116   12624 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0513 18:44:26.192107   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:26.199073   12624 ssh_runner.go:148] Run: systemctl --version
I0513 18:44:26.210106   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:26.625503   12624 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 18:44:26.651505   12624 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 18:44:26.751539   12624 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0513 18:44:26.762505   12624 cruntime.go:185] skipping containerd shutdown because we are bound to it
I0513 18:44:26.775540   12624 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0513 18:44:26.801539   12624 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0513 18:44:26.870541   12624 ssh_runner.go:148] Run: sudo systemctl start docker
I0513 18:44:26.890539   12624 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
* Preparing Kubernetes v1.18.2 on Docker 19.03.2 ...
I0513 18:44:26.945774   12624 cli_runner.go:108] Run: docker exec -t minikube dig +short host.docker.internal
I0513 18:44:27.334513   12624 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.1513963s)
I0513 18:44:27.440976   12624 network.go:57] got host ip for mount in container by digging dns: 10.17.65.2
I0513 18:44:27.440976   12624 start.go:251] checking
I0513 18:44:27.456992   12624 ssh_runner.go:148] Run: grep 10.17.65.2	host.minikube.internal$ /etc/hosts
I0513 18:44:27.463964   12624 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "10.17.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0513 18:44:27.487963   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
  - kubeadm.pod-network-cidr=10.244.0.0/16
I0513 18:44:27.901979   12624 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0513 18:44:27.902985   12624 preload.go:96] Found local preload: C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0513 18:44:27.911016   12624 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0513 18:44:27.949017   12624 docker.go:379] Got preloaded images: 
I0513 18:44:27.949017   12624 docker.go:384] k8s.gcr.io/kube-proxy:v1.18.2 wasn't preloaded
I0513 18:44:27.962014   12624 ssh_runner.go:148] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0513 18:44:27.985017   12624 ssh_runner.go:148] Run: which lz4
I0513 18:44:28.006013   12624 ssh_runner.go:148] Run: stat -c "%s %y" /preloaded.tar.lz4
I0513 18:44:28.012993   12624 ssh_runner.go:205] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I0513 18:44:28.012993   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (550953823 bytes)
I0513 18:44:38.145164   12624 docker.go:345] Took 10.153175 seconds to copy over tarball
I0513 18:44:38.158164   12624 ssh_runner.go:148] Run: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4
I0513 18:44:43.231285   12624 ssh_runner.go:188] Completed: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4: (5.0731179s)
I0513 18:44:43.231285   12624 ssh_runner.go:99] rm: /preloaded.tar.lz4
I0513 18:44:43.343350   12624 ssh_runner.go:148] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0513 18:44:43.353316   12624 ssh_runner.go:215] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3128 bytes)
I0513 18:44:43.383825   12624 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0513 18:44:43.452822   12624 ssh_runner.go:148] Run: sudo systemctl restart docker
I0513 18:44:44.950836   12624 ssh_runner.go:188] Completed: sudo systemctl restart docker: (1.4980126s)
I0513 18:44:44.958879   12624 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0513 18:44:45.000421   12624 docker.go:379] Got preloaded images: -- stdout --
kubernetesui/dashboard:v2.0.0
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
gcr.io/k8s-minikube/storage-provisioner:v1.8.1

-- /stdout --
I0513 18:44:45.000421   12624 cache_images.go:69] Images are preloaded, skipping loading
I0513 18:44:45.000421   12624 kubeadm.go:124] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.2 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0513 18:44:45.000421   12624 kubeadm.go:128] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 172.17.0.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 172.17.0.2:10249

I0513 18:44:45.009456   12624 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0513 18:44:45.057029   12624 kubeadm.go:737] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.2/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:}
I0513 18:44:45.071064   12624 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.18.2
I0513 18:44:45.081032   12624 binaries.go:43] Found k8s binaries, skipping transfer
I0513 18:44:45.094063   12624 ssh_runner.go:148] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0513 18:44:45.104031   12624 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1458 bytes)
I0513 18:44:45.122032   12624 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (532 bytes)
I0513 18:44:45.141035   12624 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0513 18:44:45.161030   12624 start.go:251] checking
I0513 18:44:45.174070   12624 ssh_runner.go:148] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
I0513 18:44:45.180041   12624 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0513 18:44:45.203065   12624 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0513 18:44:45.278335   12624 ssh_runner.go:148] Run: sudo systemctl start kubelet
I0513 18:44:45.295299   12624 certs.go:52] Setting up C:\Users\<user>\.minikube\profiles\minikube for IP: 172.17.0.2
I0513 18:44:45.295299   12624 certs.go:169] skipping minikubeCA CA generation: C:\Users\<user>\.minikube\ca.key
I0513 18:44:45.296302   12624 certs.go:169] skipping proxyClientCA CA generation: C:\Users\<user>\.minikube\proxy-client-ca.key
I0513 18:44:45.296302   12624 certs.go:267] generating minikube-user signed cert: C:\Users\<user>\.minikube\profiles\minikube\client.key
I0513 18:44:45.296302   12624 crypto.go:69] Generating cert C:\Users\<user>\.minikube\profiles\minikube\client.crt with IP's: []
I0513 18:44:45.373297   12624 crypto.go:157] Writing cert to C:\Users\<user>\.minikube\profiles\minikube\client.crt ...
I0513 18:44:45.373297   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\client.crt: {Name:mk762279d656356d328657ed3ff5ff476401dd38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:45.377298   12624 crypto.go:165] Writing key to C:\Users\<user>\.minikube\profiles\minikube\client.key ...
I0513 18:44:45.377298   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\client.key: {Name:mk05d45ecbe1986a628c8c430d55811fe08088f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:45.382308   12624 certs.go:267] generating minikube signed cert: C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f
I0513 18:44:45.382308   12624 crypto.go:69] Generating cert C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0513 18:44:45.537338   12624 crypto.go:157] Writing cert to C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f ...
I0513 18:44:45.537338   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f: {Name:mkf30c903369b0627ccbd028b34e439c6262538b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:45.542312   12624 crypto.go:165] Writing key to C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f ...
I0513 18:44:45.542312   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f: {Name:mkce5570a73f1fe64c6fad4a45f8970673940380 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:45.548361   12624 certs.go:278] copying C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt.7b749c5f -> C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt
I0513 18:44:45.550358   12624 certs.go:282] copying C:\Users\<user>\.minikube\profiles\minikube\apiserver.key.7b749c5f -> C:\Users\<user>\.minikube\profiles\minikube\apiserver.key
I0513 18:44:45.552376   12624 certs.go:267] generating aggregator signed cert: C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key
I0513 18:44:45.552376   12624 crypto.go:69] Generating cert C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt with IP's: []
I0513 18:44:45.712337   12624 crypto.go:157] Writing cert to C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt ...
I0513 18:44:45.712337   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt: {Name:mk5a9f11f3f7b57801d322dba07701f995c7356f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:45.717335   12624 crypto.go:165] Writing key to C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key ...
I0513 18:44:45.718300   12624 lock.go:35] WriteFile acquiring C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key: {Name:mk912815cb3875cbdf901f052a75aff368017a7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:45.723300   12624 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\ca-key.pem (1679 bytes)
I0513 18:44:45.723300   12624 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\ca.pem (1038 bytes)
I0513 18:44:45.724299   12624 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\cert.pem (1078 bytes)
I0513 18:44:45.724299   12624 certs.go:342] found cert: C:\Users\<user>\.minikube\certs\C:\Users\<user>\.minikube\certs\key.pem (1675 bytes)
I0513 18:44:45.725333   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I0513 18:44:45.744301   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0513 18:44:45.763730   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I0513 18:44:45.785167   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0513 18:44:45.805166   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I0513 18:44:45.824168   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0513 18:44:45.844165   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I0513 18:44:45.864167   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0513 18:44:45.884738   12624 ssh_runner.go:215] scp C:\Users\<user>\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I0513 18:44:45.904386   12624 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0513 18:44:45.938199   12624 ssh_runner.go:148] Run: openssl version
I0513 18:44:45.959200   12624 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0513 18:44:45.981201   12624 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0513 18:44:45.988215   12624 certs.go:383] hashing: -rw-r--r-- 1 root root 1066 May 13 16:57 /usr/share/ca-certificates/minikubeCA.pem
I0513 18:44:46.001293   12624 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0513 18:44:46.023167   12624 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0513 18:44:46.033167   12624 kubeadm.go:293] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3892 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0513 18:44:46.041201   12624 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0513 18:44:46.107166   12624 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0513 18:44:46.135199   12624 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0513 18:44:46.144166   12624 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I0513 18:44:46.159167   12624 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0513 18:44:46.169167   12624 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0513 18:44:46.169167   12624 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0513 18:44:56.756304   12624 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (10.5871293s)
I0513 18:44:56.756304   12624 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0513 18:44:56.766303   12624 ops.go:35] apiserver oom_adj: -16
I0513 18:44:56.773308   12624 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.2/kubectl label nodes minikube.k8s.io/version=v1.10.1 minikube.k8s.io/commit=63ab801ac27e5742ae442ce36dff7877dcccb278 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_05_13T18_44_56_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0513 18:44:56.773308   12624 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.18.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0513 18:44:57.260827   12624 kubeadm.go:868] duration metric: took 504.5227ms to wait for elevateKubeSystemPrivileges.
I0513 18:44:57.283826   12624 kubeadm.go:295] StartCluster complete in 11.2506506s
I0513 18:44:57.283826   12624 settings.go:123] acquiring lock: {Name:mk47b1af55da9543d5dc5a8134d40d87d83e1197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:57.284823   12624 settings.go:131] Updating kubeconfig:  C:\Users\<user>/.kube/config
I0513 18:44:57.286825   12624 lock.go:35] WriteFile acquiring C:\Users\<user>/.kube/config: {Name:mkfb29448095b1e10f04ea1bfff92578826b9eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0513 18:44:57.293828   12624 addons.go:320] enableAddons start: toEnable=map[], additional=[]
I0513 18:44:57.293828   12624 addons.go:50] Setting storage-provisioner=true in profile "minikube"
* Verifying Kubernetes components...
I0513 18:44:57.293828   12624 addons.go:50] Setting default-storageclass=true in profile "minikube"
I0513 18:44:57.295825   12624 addons.go:266] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0513 18:44:57.295825   12624 addons.go:126] Setting addon storage-provisioner=true in "minikube"
W0513 18:44:57.295825   12624 addons.go:135] addon storage-provisioner should already be in state true
I0513 18:44:57.295825   12624 host.go:65] Checking if "minikube" exists ...
I0513 18:44:57.307845   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0513 18:44:57.321824   12624 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 18:44:57.322824   12624 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 18:44:57.752960   12624 api_server.go:47] waiting for apiserver process to appear ...
I0513 18:44:57.764964   12624 addons.go:126] Setting addon default-storageclass=true in "minikube"
W0513 18:44:57.764964   12624 addons.go:135] addon default-storageclass should already be in state true
I0513 18:44:57.764964   12624 host.go:65] Checking if "minikube" exists ...
I0513 18:44:57.767963   12624 addons.go:233] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0513 18:44:57.767963   12624 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (1709 bytes)
I0513 18:44:57.772964   12624 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0513 18:44:57.778964   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:57.787961   12624 api_server.go:67] duration metric: took 494.1322ms to wait for apiserver process to appear ...
I0513 18:44:57.787961   12624 api_server.go:83] waiting for apiserver healthz status ...
I0513 18:44:57.787961   12624 api_server.go:193] Checking apiserver healthz at https://127.0.0.1:32780/healthz ...
I0513 18:44:57.790978   12624 cli_runner.go:108] Run: docker inspect minikube --format={{.State.Status}}
I0513 18:44:57.796975   12624 api_server.go:213] https://127.0.0.1:32780/healthz returned 200:
ok
I0513 18:44:57.799977   12624 api_server.go:136] control plane version: v1.18.2
I0513 18:44:57.799977   12624 api_server.go:126] duration metric: took 12.0158ms to wait for apiserver health ...
I0513 18:44:57.799977   12624 system_pods.go:43] waiting for kube-system pods to appear ...
I0513 18:44:57.814990   12624 system_pods.go:61] 4 kube-system pods found
I0513 18:44:57.814990   12624 system_pods.go:63] "etcd-minikube" [e50e126f-3569-4627-bc38-4d32ab542156] Pending
I0513 18:44:57.814990   12624 system_pods.go:63] "kube-apiserver-minikube" [ac6db4d3-655c-4cdf-9d11-5653cf948126] Running
I0513 18:44:57.814990   12624 system_pods.go:63] "kube-controller-manager-minikube" [bf2384f3-52a8-4410-afec-90ae8b94b097] Pending
I0513 18:44:57.814990   12624 system_pods.go:63] "kube-scheduler-minikube" [45c14fca-79f6-40f5-9977-159e1cecc3d9] Pending
I0513 18:44:57.814990   12624 system_pods.go:74] duration metric: took 15.0139ms to wait for pod list to return data ...
I0513 18:44:57.814990   12624 kubeadm.go:449] duration metric: took 521.1619ms to wait for : map[apiserver:true system_pods:true] ...
I0513 18:44:57.814990   12624 node_conditions.go:99] verifying NodePressure condition ...
I0513 18:44:57.820960   12624 node_conditions.go:111] node storage ephemeral capacity is 65792556Ki
I0513 18:44:57.820960   12624 node_conditions.go:112] node cpu capacity is 2
I0513 18:44:57.820960   12624 node_conditions.go:102] duration metric: took 5.9697ms to run NodePressure ...
I0513 18:44:58.209596   12624 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 18:44:58.222543   12624 addons.go:233] installing /etc/kubernetes/addons/storageclass.yaml
I0513 18:44:58.222543   12624 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0513 18:44:58.230542   12624 cli_runner.go:108] Run: docker inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0513 18:44:58.329236   12624 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0513 18:44:58.680201   12624 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:C:\Users\<user>\.minikube\machines\minikube\id_rsa Username:docker}
I0513 18:44:58.793964   12624 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
* Enabled addons: default-storageclass, storage-provisioner
I0513 18:44:58.920057   12624 addons.go:322] enableAddons completed in 1.6262277s
* Done! kubectl is now configured to use "minikube"
I0513 18:44:59.375026   12624 start.go:378] kubectl: 1.18.2, cluster: 1.18.2 (minor skew: 0)

@medyagh (Member) commented May 14, 2020

The behavior that works: if I clear the shared preloaded tarball from Docker Desktop's file sharing configuration and then cancel the request to share the tarball with the container, minikube can start successfully, since it uses scp later in the process to copy the tarball into the running container.

@plnordquist Interesting! Thank you very much for providing this amount of detail, and good detective work! So if you accept the Docker Desktop File Sharing prompt it won't work, but it does work when you disable the file sharing? I wonder what happens if you disable preload while keeping the file sharing.

@medyagh (Member) commented May 14, 2020

If you disable preload, would it work without any issues?

minikube delete
minikube start --driver=docker --preload=false

@plnordquist (Author)

Yes, using minikube start --driver=docker --preload=false works without any issues every time. I've used it along with minikube delete a few times now, and it successfully creates a minikube instance each time. I think it might be a little slower, but minikube still caches images and Kubernetes binaries, so it's not as slow as waiting for Docker in the container to pull the images.

@medyagh medyagh changed the title Minikube 1.10.0 fails to start [windows] [docker] preload conflicts with Docker Desktop File sharing May 14, 2020
@medyagh (Member) commented May 14, 2020

Thank you @plnordquist for reporting this; this is a bug. I wonder if we can find a non-UI way to prevent Docker Desktop from asking to share the folder?

I have also seen that notification on Windows: when minikube starts, Docker Desktop asks to share the file.

@medyagh medyagh added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels May 14, 2020
@afbjorklund (Collaborator) commented May 14, 2020

Seems like the same type of race condition that we have with the podman driver (see #8056).

I think the best long-term solution would be to just stop mounting all of /var as a volume?

@afbjorklund (Collaborator)

My suggestion is to move /var into a subdirectory of the volume, and then mount the volume somewhere else.
Then you can set up /var/lib/minikube and friends either as bind mounts or as regular symlinks?

The same way we do it for the ISO. This also gives us a place to fix the storage driver persistence...
If we worry about backwards compatibility, we could set up some symlinks on existing volumes.
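For illustration, a minimal sketch of that layout, assuming a neutral /mnt/data mountpoint and an init step inside the container that wires things up (the names are placeholders, not an agreed design):

# mount the named volume at a neutral path instead of /var (other flags elided):
docker run -d -t --privileged --volume minikube:/mnt/data ... gcr.io/k8s-minikube/kicbase:v0.0.10
# then, inside the container, bind-mount the persistent pieces into place:
mkdir -p /mnt/data/var/lib/docker /var/lib/docker
mount --bind /mnt/data/var/lib/docker /var/lib/docker
# ...or the symlink variant, on a fresh volume where the target does not yet exist:
mkdir -p /mnt/data/var/lib/minikube
ln -s /mnt/data/var/lib/minikube /var/lib/minikube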

@medyagh (Member) commented Jun 4, 2020

Here is the full log:

PS C:\Users\medya\Downloads> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
PS C:\Users\medya\Downloads> .\minikube-windows-amd64v_1_11.exe start --driver=docker --alsologtostderr
I0603 17:54:22.358989    7164 start.go:98] hostinfo: {"hostname":"MEDYA1-W","uptime":143,"bootTime":1591231919,"procs":230,"os":"windows","platform":"Microsoft Windows 10 Enterprise","platformFamily":"Standalone Workstation","platformVersion":"10.0.18362 Build 18362","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"b4effa11-ef54-47d5-b2e4-c3a0780dd0d2"}
W0603 17:54:22.359987    7164 start.go:106] gopshost.Virtualization returned error: not implemented yet
* minikube v1.11.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
I0603 17:54:22.367968    7164 notify.go:125] Checking for updates...
I0603 17:54:22.367968    7164 driver.go:253] Setting default libvirt URI to qemu:///system
I0603 17:54:23.033424    7164 docker.go:95] docker version: linux-19.03.8
* Using the docker driver based on user configuration
I0603 17:54:23.035422    7164 start.go:214] selected driver: docker
I0603 17:54:23.035422    7164 start.go:611] validating driver "docker" against <nil>
I0603 17:54:23.035422    7164 start.go:617] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0603 17:54:23.036419    7164 start.go:935] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0603 17:54:23.036419    7164 start_flags.go:218] no existing cluster config was found, will generate one from the flags
I0603 17:54:23.041408    7164 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0603 17:54:25.611451    7164 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (2.5759999s)
I0603 17:54:25.611451    7164 start_flags.go:232] Using suggested 8100MB memory alloc based on sys=32619MB, container=9970MB
I0603 17:54:25.613451    7164 start_flags.go:556] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node minikube in cluster minikube
I0603 17:54:25.616439    7164 cache.go:105] Beginning downloading kic artifacts for docker with docker
I0603 17:54:26.281893    7164 image.go:88] Found gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 in local docker daemon, skipping pull
I0603 17:54:26.281893    7164 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0603 17:54:26.284888    7164 preload.go:103] Found local preload: C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0603 17:54:26.284888    7164 cache.go:49] Caching tarball of preloaded images
I0603 17:54:26.286883    7164 preload.go:129] Found C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0603 17:54:26.286883    7164 cache.go:52] Finished verifying existence of preloaded tar for  v1.18.3 on docker
I0603 17:54:26.287880    7164 profile.go:156] Saving config to C:\Users\medya\.minikube\profiles\minikube\config.json ...
I0603 17:54:26.289877    7164 lock.go:35] WriteFile acquiring C:\Users\medya\.minikube\profiles\minikube\config.json: {Name:mk1eb288c63b7363cab0f1c3ec04745eaac56c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 17:54:26.295861    7164 cache.go:152] Successfully downloaded all kic artifacts
I0603 17:54:26.295861    7164 start.go:240] acquiring machines lock for minikube: {Name:mkb5fc01d338b8709271974409864786bd6beddc Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0603 17:54:26.295861    7164 start.go:244] acquired machines lock for "minikube" in 0s
I0603 17:54:26.295861    7164 start.go:84] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
I0603 17:54:26.296861    7164 start.go:121] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=8100MB) ...
I0603 17:54:26.299853    7164 start.go:157] libmachine.API.Create for "minikube" (driver="docker")
I0603 17:54:26.299853    7164 client.go:161] LocalClient.Create starting
I0603 17:54:26.300849    7164 main.go:110] libmachine: Reading certificate data from C:\Users\medya\.minikube\certs\ca.pem
I0603 17:54:26.304842    7164 main.go:110] libmachine: Decoding PEM data...
I0603 17:54:26.306841    7164 main.go:110] libmachine: Parsing certificate...
I0603 17:54:26.307836    7164 main.go:110] libmachine: Reading certificate data from C:\Users\medya\.minikube\certs\cert.pem
I0603 17:54:26.311828    7164 main.go:110] libmachine: Decoding PEM data...
I0603 17:54:26.311828    7164 main.go:110] libmachine: Parsing certificate...
I0603 17:54:26.332777    7164 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0603 17:54:27.009208    7164 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0603 17:54:27.675663    7164 oci.go:98] Successfully created a docker volume minikube
I0603 17:54:27.675663    7164 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0603 17:54:27.676661    7164 preload.go:103] Found local preload: C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0603 17:54:27.677661    7164 kic.go:134] Starting extracting preloaded images to volume ...
I0603 17:54:27.682647    7164 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0603 17:54:27.683645    7164 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0603 17:54:30.251229    7164 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (2.5745355s)
I0603 17:54:30.257214    7164 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0603 17:54:32.866121    7164 cli_runner.go:150] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.6149541s)
I0603 17:54:32.874102    7164 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8100mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0603 17:54:34.445362    7164 cli_runner.go:150] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8100mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: (1.5749013s)
I0603 17:54:34.452345    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:35.155019    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:35.338595    7164 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (7.6726925s)
I0603 17:54:35.338595    7164 kic.go:139] duration metric: took 7.678690 seconds to extract preloaded images to volume
I0603 17:54:35.845418    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:36.541385    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:37.238016    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:37.950889    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:38.715779    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:39.457061    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:40.260200    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:41.106779    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:42.157644    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:43.563419    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:45.684290    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:47.538319    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:54:48.183956    7164 client.go:164] LocalClient.Create took 21.9338269s
I0603 17:54:50.180043    7164 start.go:124] duration metric: createHost completed in 23.9385386s
I0603 17:54:50.180043    7164 start.go:75] releasing machines lock for "minikube", held for 23.9395405s
I0603 17:54:50.195999    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0603 17:54:50.850454    7164 stop.go:36] StopHost: minikube
* Stopping "minikube" in docker ...
I0603 17:54:50.871209    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0603 17:54:51.509795    7164 stop.go:76] host is in state Stopped
I0603 17:54:51.509795    7164 main.go:110] libmachine: Stopping "minikube"...
I0603 17:54:51.520762    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0603 17:54:52.166057    7164 stop.go:56] stop err: Machine "minikube" is already stopped.
I0603 17:54:52.166057    7164 stop.go:59] host is already stopped
* Deleting "minikube" in docker ...
I0603 17:54:53.183364    7164 cli_runner.go:108] Run: docker container inspect -f {{.Id}} minikube
I0603 17:54:53.830232    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0603 17:54:54.476612    7164 cli_runner.go:108] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I0603 17:54:55.125628    7164 oci.go:544] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error response from daemon: Container 3619f149a785c9f0a679e00a5f750b2a8047a784a80c3a83ee2e8aae30ecbd64 is not running
I0603 17:54:56.128334    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Status}}
I0603 17:54:56.772808    7164 oci.go:552] container minikube status is Stopped
I0603 17:54:56.772808    7164 oci.go:564] Successfully shutdown container minikube
I0603 17:54:56.778823    7164 cli_runner.go:108] Run: docker rm -f -v minikube
I0603 17:54:57.448846    7164 cli_runner.go:108] Run: docker container inspect -f {{.Id}} minikube
! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
I0603 17:55:03.083118    7164 start.go:240] acquiring machines lock for minikube: {Name:mkb5fc01d338b8709271974409864786bd6beddc Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0603 17:56:41.759287    7164 start.go:244] acquired machines lock for "minikube" in 0s
I0603 17:56:41.770332    7164 start.go:84] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.3 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} &{Name: IP: Port:8443 KubernetesVersion:v1.18.3 ControlPlane:true Worker:true}
I0603 17:56:41.770332    7164 start.go:121] createHost starting for "" (driver="docker")
* Creating docker container (CPUs=2, Memory=8100MB) ...
I0603 17:56:41.772351    7164 start.go:157] libmachine.API.Create for "minikube" (driver="docker")
I0603 17:56:41.772351    7164 client.go:161] LocalClient.Create starting
I0603 17:56:41.772351    7164 main.go:110] libmachine: Reading certificate data from C:\Users\medya\.minikube\certs\ca.pem
I0603 17:56:41.775349    7164 main.go:110] libmachine: Decoding PEM data...
I0603 17:56:41.775349    7164 main.go:110] libmachine: Parsing certificate...
I0603 17:56:41.775349    7164 main.go:110] libmachine: Reading certificate data from C:\Users\medya\.minikube\certs\cert.pem
I0603 17:56:41.776319    7164 main.go:110] libmachine: Decoding PEM data...
I0603 17:56:41.776319    7164 main.go:110] libmachine: Parsing certificate...
I0603 17:56:41.789289    7164 cli_runner.go:108] Run: docker ps -a --format {{.Names}}
I0603 17:56:42.443503    7164 cli_runner.go:108] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0603 17:56:43.091034    7164 oci.go:98] Successfully created a docker volume minikube
I0603 17:56:43.091034    7164 preload.go:95] Checking if preload exists for k8s version v1.18.3 and runtime docker
I0603 17:56:43.092032    7164 preload.go:103] Found local preload: C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4
I0603 17:56:43.092999    7164 kic.go:134] Starting extracting preloaded images to volume ...
I0603 17:56:43.095994    7164 cli_runner.go:108] Run: docker system info --format "{{json .}}"
I0603 17:56:43.097989    7164 cli_runner.go:108] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir
I0603 17:56:45.660100    7164 cli_runner.go:150] Completed: docker system info --format "{{json .}}": (2.5700497s)
I0603 17:56:45.665088    7164 cli_runner.go:108] Run: docker info --format "'{{json .SecurityOptions}}'"
I0603 17:56:48.221911    7164 cli_runner.go:150] Completed: docker info --format "'{{json .SecurityOptions}}'": (2.5627484s)
I0603 17:56:48.226865    7164 cli_runner.go:108] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8100mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438
I0603 17:56:49.731434    7164 cli_runner.go:150] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var --cpus=2 --memory=8100mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438: (1.5080563s)
I0603 17:56:49.737418    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:50.321972    7164 cli_runner.go:150] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\medya\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir: (7.2407274s)
I0603 17:56:50.321972    7164 kic.go:139] duration metric: took 7.245729 seconds to extract preloaded images to volume
I0603 17:56:50.414140    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:51.081381    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:51.762519    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:52.448253    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:53.143576    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:53.910601    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:54.663396    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:55.471023    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:56.406370    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:57.596336    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:56:59.269524    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:57:00.809654    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:57:02.794544    7164 cli_runner.go:108] Run: docker container inspect minikube --format={{.State.Running}}
I0603 17:57:03.438640    7164 client.go:164] LocalClient.Create took 21.7165073s
I0603 17:57:05.434030    7164 start.go:124] duration metric: createHost completed in 23.7185457s
I0603 17:57:05.434030    7164 start.go:75] releasing machines lock for "minikube", held for 23.7195204s
* Failed to start docker container. "minikube start" may fix it: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
I0603 17:57:05.436028    7164 exit.go:58] WithError(error provisioning host)=Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet called from:
goroutine 1 [running]:
runtime/debug.Stack(0x40acf1, 0x18d85a0, 0x18bd240)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xa4
k8s.io/minikube/pkg/minikube/exit.WithError(0x1b44a0c, 0x17, 0x1e16f20, 0xc0008cc420)
        /app/pkg/minikube/exit/exit.go:58 +0x3b
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b6c200, 0xc000128520, 0x0, 0x2)
        /app/cmd/minikube/cmd/start.go:169 +0xac9
github.com/spf13/cobra.(*Command).execute(0x2b6c200, 0xc000128500, 0x2, 0x2, 0x2b6c200, 0xc000128500)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2b1
github.com/spf13/cobra.(*Command).ExecuteC(0x2b710c0, 0x0, 0x0, 0xc000004e01)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x350
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:112 +0x6f5
main.main()
        /app/cmd/minikube/main.go:66 +0xf1
W0603 17:57:05.437026    7164 out.go:201] error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
*
X error provisioning host: Failed to start host: creating host: create: creating: create kic node: check container "minikube" running: temporary error created container "minikube" is not running yet
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

@medyagh (Member) commented Jun 4, 2020

@afbjorklund The current two docker run commands are these:

docker run -d -t --privileged \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
--tmpfs /tmp --tmpfs /run \
-v /lib/modules:/lib/modules:ro --hostname minikube \
--name minikube --label created_by.minikube.sigs.k8s.io=true \
--label name.minikube.sigs.k8s.io=minikube \
--label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube \
--volume minikube:/var \
--cpus=2 --memory=3900mb \
-e container=docker \
--expose 8443 \
--publish=127.0.0.1::8443 \
--publish=127.0.0.1::22 \
--publish=127.0.0.1::2376 \
--publish=127.0.0.1::5000 \
gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 

and

docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 -I lz4 -xvf /preloaded.tar -C /extractDir

If you can explain how we can do it without the race condition, we could implement it that way.
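A hedged sketch of one way to avoid the race, assuming the extraction step can simply be ordered before the node container starts (an illustration of the idea, not the project's decided fix):

docker volume create minikube
# this foreground run blocks until tar exits, so the volume is fully populated...
docker run --rm --entrypoint /usr/bin/tar -v C:\Users\<user>\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.10 -I lz4 -xvf /preloaded.tar -C /extractDir
# ...before the node container mounts it at /var (other flags and the image digest elided):
docker run -d -t --privileged --volume minikube:/var ... gcr.io/k8s-minikube/kicbase:v0.0.10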

@afbjorklund (Collaborator)

@medyagh : If we use a custom mountpoint also in the first run, then the /var files will stay in the image...
So there is no longer a need to copy e.g. /var/lib/dpkg from the Ubuntu layer to the minikube volume.

But just like on the VM, we would then need to create the necessary symlinks or bind mounts to that mountpoint.
So in this case we would need something like the minikube-automount to do the proper setup, as in the excerpt below:

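    # (from the ISO's minikube-automount: recreate each persistent directory on the data partition and bind-mount it into place)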
    mkdir -p /var/lib

    mkdir -p /mnt/$PARTNAME/var/lib/boot2docker
    mkdir /var/lib/boot2docker
    mount --bind /mnt/$PARTNAME/var/lib/boot2docker /var/lib/boot2docker

    mkdir -p /mnt/$PARTNAME/var/lib/docker
    mkdir -p /var/lib/docker
    mount --bind /mnt/$PARTNAME/var/lib/docker /var/lib/docker

    mkdir -p /mnt/$PARTNAME/var/lib/containerd
    mkdir -p /var/lib/containerd
    mount --bind /mnt/$PARTNAME/var/lib/containerd /var/lib/containerd

    mkdir -p /mnt/$PARTNAME/var/lib/containers
    mkdir -p /var/lib/containers
    mount --bind /mnt/$PARTNAME/var/lib/containers /var/lib/containers

    mkdir -p /mnt/$PARTNAME/var/log
    mkdir /var/log
    mount --bind /mnt/$PARTNAME/var/log /var/log

    mkdir -p /mnt/$PARTNAME/var/tmp
    mkdir /var/tmp
    mount --bind /mnt/$PARTNAME/var/tmp /var/tmp

    mkdir -p /mnt/$PARTNAME/var/lib/kubelet
    mkdir /var/lib/kubelet
    mount --bind /mnt/$PARTNAME/var/lib/kubelet /var/lib/kubelet

    mkdir -p /mnt/$PARTNAME/var/lib/cni
    mkdir /var/lib/cni
    mount --bind /mnt/$PARTNAME/var/lib/cni /var/lib/cni

    mkdir -p /mnt/$PARTNAME/data
    mkdir /data
    mount --bind /mnt/$PARTNAME/data /data

    mkdir -p /mnt/$PARTNAME/hostpath_pv
    mkdir /tmp/hostpath_pv
    mount --bind /mnt/$PARTNAME/hostpath_pv /tmp/hostpath_pv

    mkdir -p /mnt/$PARTNAME/hostpath-provisioner
    mkdir /tmp/hostpath-provisioner
    mount --bind /mnt/$PARTNAME/hostpath-provisioner /tmp/hostpath-provisioner
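(Here $PARTNAME names the ISO's persistent data partition; for the docker driver, the analogous mount source would presumably be the named minikube volume mounted at a neutral path, as sketched earlier.)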

@priyawadhwa

@afbjorklund I think I'm seeing this error trying to run minikube in Cloud Shell and it seems related to #8163 as well.

WDYT of changing Docker's home directory? I'm not super familiar with this issue, but I'm wondering if that would resolve it. If we changed it from /var/lib/docker to /tmp/var/lib/docker, then the preload extract volume would be mounted at /tmp/var instead of /var.

Do you know if this would break anything? It seems to work fine when testing locally on my Mac and in Cloud Shell.
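For what it's worth, a minimal sketch of that relocation via Docker's data-root setting (the /tmp/var/lib/docker path just follows the suggestion above; an illustration, not a decided change to the kic base image):

# inside the minikube container, point dockerd at the new directory and restart it:
echo '{ "data-root": "/tmp/var/lib/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
docker info --format '{{.DockerRootDir}}'   # should now print /tmp/var/lib/docker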

@afbjorklund (Collaborator)

I don't think we will use a custom mountpoint initially, but will just address the race.

It is likely that the /extractDir mount is the real issue here, rather than /var.

@afbjorklund (Collaborator)

This issue will be closed as a duplicate of #8151, since it has the same root cause.

@afbjorklund afbjorklund added the triage/duplicate Indicates an issue is a duplicate of other open issue. label Jul 20, 2020
@afbjorklund afbjorklund changed the title preload conflicts with Docker Desktop File sharing preload causes /var conflicts with Docker Desktop File sharing Jul 20, 2020
@medyagh medyagh closed this as completed Jul 20, 2020