
WIP: create tunnel for qemu #14615

Closed
wants to merge 19 commits into from

Conversation

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jul 20, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: klaases
Once this PR has been reviewed and has the lgtm label, please assign afbjorklund for approval by writing /assign @afbjorklund in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@klaases
Contributor Author

klaases commented Jul 20, 2022

If the error says PROVIDER_QEMU2_NOT_FOUND then install QEMU.

$ ./out/minikube start --driver=qemu
😄  minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration

🤷  Exiting due to PROVIDER_QEMU2_NOT_FOUND: The 'qemu2' provider was not found: exec: "qemu-system-aarch64": executable file not found in $PATH
💡  Suggestion: Install qemu-system
📘  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/qemu2/

The installation documentation linked above at https://minikube.sigs.k8s.io/docs/reference/drivers/qemu2/ was not found; however, the same path with qemu (without the 2) does exist.

That page does include download instructions, which are here:
https://www.qemu.org/download/
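For reference, the executable the driver complains about is architecture-specific. A minimal sketch of resolving the expected binary name, with install commands assumed from the QEMU download page (not taken from this PR):

```shell
# Map the host architecture to the qemu-system binary that minikube's
# qemu2 driver expects to find on $PATH.
qemu_binary() {
  case "$1" in
    arm64|aarch64) echo "qemu-system-aarch64" ;;
    amd64|x86_64)  echo "qemu-system-x86_64" ;;
    *)             echo "unsupported: $1" ;;
  esac
}

# On macOS the usual install would be via Homebrew (assumption: the
# `qemu` formula provides the qemu-system-* binaries):
#   brew install qemu
# On Debian/Ubuntu:
#   sudo apt-get install qemu-system

qemu_binary "$(uname -m)"
```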

@klaases
Contributor Author

klaases commented Jul 20, 2022

Successfully starting up with QEMU on Mac M1 (ARM64):

$ ./out/minikube start --driver=qemu
😄  minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
💿  Downloading VM boot image ...
    > minikube-v1.26.0-1657340101...:  65 B / 65 B [---------] 100.00% ? p/s 0s
    > minikube-v1.26.0-1657340101...:  317.91 MiB / 317.91 MiB  100.00% 10.05 M
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=1988MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

@klaases klaases changed the title create tunnel for qemu WIP: create tunnel for qemu Jul 20, 2022
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 20, 2022
@klaases klaases self-assigned this Jul 20, 2022
@klaases
Contributor Author

klaases commented Jul 21, 2022

Attempting to start a service with QEMU results in an error, which we will try to solve in this PR:

$ minikube service nginx-service

❌  Exiting due to MK_UNIMPLEMENTED: minikube service is not currently implemented with the qemu2 driver. See https://github.com/kubernetes/minikube/issues/14146 for details.

#14146

}

// NewServiceTunnel ...
func NewServiceTunnel(sshPort, sshKey string, v1Core typed_core.CoreV1Interface, suppressStdOut bool) *ServiceTunnel {
Member

I am curious how this code is different from the one in KIC. Is there a way to reuse that code, so that when we need to improve one we don't have to do it in two places?

Contributor Author

Yes, this is the same code. I am not sure why it was originally placed in the kic folder, but I'll find out.

@klaases
Contributor Author

klaases commented Jul 23, 2022

Running the following command yielded the output below:

$ clear; ./out/minikube delete --all; make clean; make; echo ""; ./out/minikube start --driver=qemu; kubectl apply -f nginx.yaml; ./out/minikube service nginx-service;

Note: the following commands do not need to be run:
kubectl delete pod nginx; kubectl delete service nginx-service;

😄  minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=1988MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
service/nginx-service created
Error from server (Forbidden): error when creating "nginx.yaml": pods "nginx" is forbidden: error looking up service account default/default: serviceaccount "default" not found
|-----------|---------------|-------------|--------------|
| NAMESPACE |     NAME      | TARGET PORT |     URL      |
|-----------|---------------|-------------|--------------|
| default   | nginx-service |             | No node port |
|-----------|---------------|-------------|--------------|
😿  service default/nginx-service has no node port
...

TODO: need to:

  • Create a pod.
  • Create a node port.
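To address both TODOs, a hypothetical nginx.yaml (all names and images assumed, not taken from this PR) could define the pod together with a NodePort service:

```yaml
# Hypothetical manifest: a pod plus a NodePort service exposing it.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort        # gives the service a node port, fixing "No node port"
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

With type: NodePort, the control plane allocates a port in the node-port range automatically, so `minikube service nginx-service` would have a port to tunnel to.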

@klaases
Contributor Author

klaases commented Jul 26, 2022

TODO(): Need to deploy a service with a node port.

Waiting for the service to come up; however, it is timing out:

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

@klaases
Contributor Author

klaases commented Jul 26, 2022

Starting fresh with the following:

clear; ./out/minikube delete --all; make clean; make; echo ""; ./out/minikube start --driver=qemu

Waited for minikube to fully start up; however, on macOS M1 (ARM64) an error occurs (see below).

In addition, when trying to debug with systemctl status kubelet, the result is bash: systemctl: command not found. As far as I can tell, systemctl is only available on Linux.

Will switch over to my Linux machine and keep trying.
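Since systemctl exists only inside the Linux guest, the usual route from a macOS host is through minikube ssh. A sketch, shown rather than executed, since it assumes the VM actually booted, which is exactly what fails here:

```shell
# systemctl lives inside the minikube VM (Linux), not on the macOS host,
# so kubelet diagnostics have to be run through the guest:
kubelet_status="minikube ssh -- sudo systemctl status kubelet"
kubelet_logs="minikube ssh -- sudo journalctl -xeu kubelet"

# Printed rather than executed; these only make sense once the VM is up.
echo "$kubelet_status"
echo "$kubelet_logs"
```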

qemu2 ⌅1 ✭3
🔥  Deleting "minikube" in qemu2 ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
rm -rf /Users/jklaas/minikube/out
rm -f pkg/minikube/assets/assets.go
rm -f pkg/minikube/translate/translations.go
rm -rf ./vendor
rm -rf /tmp/tmp.*.minikube_*
go build  -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.26.0 -X k8s.io/minikube/pkg/version.isoVersion=v1.26.0-1657340101-14534 -X k8s.io/minikube/pkg/version.gitCommitID="c88aaaf78692f4c10d2bd4e73139c37d6392427c-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -o out/minikube k8s.io/minikube/cmd/minikube

😄  minikube v1.26.0 on Darwin 12.4 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=1988MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0726 22:51:18.389540    1179 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...

@klaases
Contributor Author

klaases commented Jul 26, 2022

On Linux, found the following error!

ERROR: Could not access KVM kernel module: Permission denied
qemu-system-x86_64: -accel kvm: failed to initialize kvm: Permission denied

$ ./out/minikube start --driver=qemu
😄  minikube v1.26.0 on Debian rodete (kvm/amd64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...- OUTPUT: 
ERROR: Could not access KVM kernel module: Permission denied
qemu-system-x86_64: -accel kvm: failed to initialize kvm: Permission denied


🔥  Deleting "minikube" in qemu2 ...
🤦  StartHost failed, but will try again: creating host: create: creating: exit status 1
🔥  Creating qemu2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...| OUTPUT: 
ERROR: Could not access KVM kernel module: Permission denied
qemu-system-x86_64: -accel kvm: failed to initialize kvm: Permission denied


😿  Failed to start qemu2 VM. Running "minikube delete" may fix it: creating host: create: creating: exit status 1

❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: exit status 1

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
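The generic QEMU error "Could not access KVM kernel module: Permission denied" usually means the invoking user cannot open /dev/kvm. A common remedy (group name assumed; it is distro-dependent) is joining the kvm group rather than escalating with sudo:

```shell
# Check whether the current user can open /dev/kvm, which is what
# qemu's `-accel kvm` flag needs.
check_kvm_access() {
  if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
    echo "ok"
  else
    echo "no access"
  fi
}

# Typical fix (run once, then log out/in or `newgrp kvm`):
#   ls -l /dev/kvm                # inspect ownership/permissions
#   sudo usermod -aG kvm "$USER"  # add yourself to the kvm group

check_kvm_access
```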

@klaases
Contributor Author

klaases commented Jul 26, 2022

Trying ./out/minikube start --driver=qemu with sudo to bypass the permission issues did not work.

qemu2 ✭7
$ ./out/minikube delete --all
🔥  Successfully deleted all profiles
23:17:02 jklaas/minikube
qemu2 ✭7
$ minikube delete --all
🔥  Successfully deleted all profiles
23:17:08 jklaas/minikube
qemu2 ✭7
$ sudo ./out/minikube start --driver=qemu
😄  minikube v1.26.0 on Debian rodete (kvm/amd64)

💢  Exiting due to GUEST_DRIVER_MISMATCH: The existing "minikube" cluster was created using the "none" driver, which is incompatible with requested "qemu" driver.
💡  Suggestion: Delete the existing 'minikube' cluster using: 'minikube delete', or start the existing 'minikube' cluster using: 'minikube start --driver=none'

@klaases
Contributor Author

klaases commented Jul 27, 2022

@medyagh confirmed that: "we do NOT have support for KVM on linux" and "u would need to run it on Mac M1".

For development purposes, I will switch back to Mac M1, but it does not seem like minikube start with qemu is working on Mac M1, per #14615 (comment)

Has ./out/minikube start --driver=qemu worked for others?

@klaases
Contributor Author

klaases commented Jul 27, 2022

Found that #14412 reported a similar error with regards to "The kubelet is not running", as shown above.

@klaases
Contributor Author

klaases commented Jul 28, 2022

With the latest development version of minikube, I am unable to run QEMU on Mac M1.

clear; ./out/minikube delete --all; make clean; make; echo ""; ./out/minikube start --driver=qemu

Full Output

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.24.3 preload ...
    > preloaded-images-k8s-v18-v1...:  342.82 MiB / 342.82 MiB  100.00% 27.58 M
🔥  Creating qemu2 VM (CPUs=2, Memory=1988MB, Disk=20000MB) ...
❗  This VM is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0728 20:19:44.856627    1182 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...

💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0728 20:23:47.509888    2185 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

❌  Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'

stderr:
W0728 20:23:47.509888    2185 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

💡  Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
🍿  Related issue: https://github.com/kubernetes/minikube/issues/4172

The problem may be due in part to the installed QEMU version being too new.

$ brew info qemu
qemu: stable 7.0.0 (bottled), HEAD
Emulator for x86 and PowerPC
https://www.qemu.org/
/opt/homebrew/Cellar/qemu/7.0.0_1 (162 files, 610.9MB) *
  Poured from bottle on 2022-07-20 at 16:33:08
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/qemu.rb
License: GPL-2.0-only
==> Dependencies
Build: libtool ✔, meson ✘, ninja ✘, pkg-config ✔
Required: glib ✔, gnutls ✔, jpeg ✔, libpng ✔, libslirp ✔, libssh ✔, libusb ✔, lzo ✔, ncurses ✔, nettle ✔, pixman ✔, snappy ✔, vde ✔, zstd ✔
==> Options
--HEAD
	Install HEAD version
==> Analytics
install: 57,431 (30 days), 166,812 (90 days), 488,410 (365 days)
install-on-request: 37,201 (30 days), 102,749 (90 days), 308,525 (365 days)
build-error: 76 (30 days)

I will try downgrading QEMU to see if that helps minikube.

@klaases commented Jul 28, 2022

Rolling back qemu to see if I can get minikube running on a Mac M1.

$ brew uninstall qemu
Uninstalling /opt/homebrew/Cellar/qemu/7.0.0_1... (162 files, 610.9MB)

Searching for previous releases:
https://download.qemu.org/

$ brew install qemu@6.2.0
Warning: No available formula with the name "qemu@6.2.0".
$ brew install qemu@6.1.1
Warning: No available formula with the name "qemu@6.1.1".
$ brew install qemu@6.0.0
Warning: No available formula with the name "qemu@6.0.0".

Unable to roll back qemu on the Mac M1; will try another machine.
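Homebrew no longer carries versioned qemu formulae, but an older formula can sometimes be recovered from homebrew-core's history with `brew extract`. The sketch below is a guess at that workflow, not something verified on this machine: the tap name is hypothetical, `brew extract` must be able to find the requested version in homebrew-core's git history, and the script only prints the commands (dry-run) rather than running them.

```shell
#!/usr/bin/env sh
# Sketch: install an older QEMU from homebrew-core history via `brew extract`.
# TAP is a hypothetical personal tap; VERSION availability is not guaranteed.
TAP="klaases/local"
VERSION="6.2.0"

run() { echo "+ $*"; }   # dry-run: print each command instead of executing it

run brew tap-new "$TAP"                           # create an empty personal tap
run brew extract --version="$VERSION" qemu "$TAP" # copy the old formula into it
run brew install "$TAP/qemu@$VERSION"             # install the extracted formula
```

Remove the `run` prefix to execute for real; even then, older qemu versions may not build cleanly on Apple silicon.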

@klaases commented Jul 29, 2022

Update: I was able to get this working on an external machine; however, I would like to get it working on my local machine for development purposes.

@klaases commented Jul 29, 2022

Re-trying on local machine, Mac M1.

Update 1 - running ./out/minikube ssh followed by systemctl status kubelet returned an Error getting node error.

Full Output for 'systemctl status kubelet'

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Fri 2022-07-29 20:43:37 UTC; 10min ago
       Docs: http://kubernetes.io/docs/
   Main PID: 2288 (kubelet)
      Tasks: 14 (limit: 1914)
     Memory: 45.6M
     CGroup: /system.slice/kubelet.service
             └─2288 /var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/boots

Jul 29 20:54:16 minikube kubelet[2288]: E0729 20:54:16.761272    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:16 minikube kubelet[2288]: E0729 20:54:16.863724    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:16 minikube kubelet[2288]: E0729 20:54:16.969703    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.074631    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.175206    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.280161    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.381811    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.482647    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.583942    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:54:17 minikube kubelet[2288]: E0729 20:54:17.684897    2288 kubelet.go:2424] "Error getting node" err ...

Update 2 - running ./out/minikube ssh followed by journalctl -xeu kubelet also returned an Error getting node error.

Full Output for 'journalctl -xeu kubelet'

● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
-- Journal begins at Fri 2022-07-29 20:39:18 UTC, ends at Fri 2022-07-29 20:57:28 UTC. --
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.319074    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.419906    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.521391    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.625133    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.725587    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.841062    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:55 minikube kubelet[2288]: E0729 20:55:55.945198    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:56 minikube kubelet[2288]: E0729 20:55:56.050096    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:56 minikube kubelet[2288]: E0729 20:55:56.050890    2288 event.go:276] Unable to write event: '&v1
Jul 29 20:55:56 minikube kubelet[2288]: E0729 20:55:56.154046    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:56 minikube kubelet[2288]: E0729 20:55:56.257445    2288 kubelet.go:2424] "Error getting node" err
Jul 29 20:55:56 minikube kubelet[2288]: E0729 20:55:56.360475 ...

@klaases commented Jul 30, 2022

There may be some lingering references to qemu left over after the uninstall.

  • I am now thinking the error has something to do with the QEMU installation.
  • Tried finding the process with pgrep qemu and stopping it with sudo kill -9 51362, to no avail.

Next:

  • I am going to try uninstalling qemu again, which I had already done a few times.
  • But this time I will restart the machine afterward.
  • I will also try removing all of qemu's dependencies via brew, restarting after that too.
  • The same config was working fine before.
  • And I verified that it works on a MacStadium machine (also an M1).

brew uninstall qemu
brew autoremove

$ brew uninstall qemu
Uninstalling /opt/homebrew/Cellar/qemu/7.0.0_1... (162 files, 610.9MB)
$ brew autoremove
==> Uninstalling 32 unneeded formulae:
bdw-gc
gettext
glib
gmp
gnutls
guile
jpeg
libevent
libffi
libidn2
libpng
libslirp
libssh
libtasn1
libtool
libunistring
libusb
lz4
lzo
m4
ncurses
nettle
p11-kit
pcre
pixman
pkg-config
readline
snappy
unbound
vde
xz
zstd
Uninstalling /opt/homebrew/Cellar/libpng/1.6.37... (27 files, 1.3MB)
Uninstalling /opt/homebrew/Cellar/ncurses/6.3... (3,968 files, 9.6MB)
Uninstalling /opt/homebrew/Cellar/pixman/0.40.0... (11 files, 841.1KB)
Uninstalling /opt/homebrew/Cellar/libslirp/4.7.0... (11 files, 385KB)
Uninstalling /opt/homebrew/Cellar/vde/2.3.2_1... (73 files, 1.7MB)
Uninstalling /opt/homebrew/Cellar/snappy/1.1.9... (18 files, 158.7KB)
Uninstalling /opt/homebrew/Cellar/zstd/1.5.2... (31 files, 2.2MB)
Uninstalling /opt/homebrew/Cellar/libssh/0.9.6... (23 files, 1.2MB)
Uninstalling /opt/homebrew/Cellar/jpeg/9e... (21 files, 904.2KB)
Uninstalling /opt/homebrew/Cellar/libusb/1.0.26... (22 files, 595KB)
Uninstalling /opt/homebrew/Cellar/lzo/2.10... (31 files, 565.5KB)
Uninstalling /opt/homebrew/Cellar/gnutls/3.7.6... (1,285 files, 11MB)

Warning: The following gnutls configuration files have not been removed!
If desired, remove them manually with `rm -rf`:
  /opt/homebrew/etc/gnutls
  /opt/homebrew/etc/gnutls/cert.pem
Uninstalling /opt/homebrew/Cellar/libidn2/2.3.3... (78 files, 1MB)
Uninstalling /opt/homebrew/Cellar/nettle/3.8.1... (91 files, 2.9MB)
Uninstalling /opt/homebrew/Cellar/glib/2.72.3... (432 files, 21.1MB)
Uninstalling /opt/homebrew/Cellar/lz4/1.9.3... (22 files, 620.6KB)
Uninstalling /opt/homebrew/Cellar/xz/5.2.5_1... (95 files, 1.4MB)
Uninstalling /opt/homebrew/Cellar/unbound/1.16.1... (58 files, 5.6MB)

Warning: The following unbound configuration files have not been removed!
If desired, remove them manually with `rm -rf`:
  /opt/homebrew/etc/unbound
  /opt/homebrew/etc/unbound/unbound.conf
  /opt/homebrew/etc/unbound/unbound.conf.default
Uninstalling /opt/homebrew/Cellar/guile/3.0.8... (846 files, 62.6MB)
Uninstalling /opt/homebrew/Cellar/p11-kit/0.24.1... (67 files, 3.9MB)
Uninstalling /opt/homebrew/Cellar/pkg-config/0.29.2_3... (11 files, 676.4KB)
Uninstalling /opt/homebrew/Cellar/libtool/2.4.7... (75 files, 3.8MB)
Uninstalling /opt/homebrew/Cellar/gmp/6.2.1_1... (21 files, 3.2MB)
Uninstalling /opt/homebrew/Cellar/libunistring/1.0... (56 files, 5.0MB)
Uninstalling /opt/homebrew/Cellar/bdw-gc/8.0.6... (69 files, 1.7MB)
Uninstalling /opt/homebrew/Cellar/readline/8.1.2... (48 files, 1.7MB)
Uninstalling /opt/homebrew/Cellar/gettext/0.21... (1,953 files, 20.6MB)
Uninstalling /opt/homebrew/Cellar/libtasn1/4.18.0... (61 files, 662.7KB)
Uninstalling /opt/homebrew/Cellar/libevent/2.1.12... (57 files, 2.1MB)
Uninstalling /opt/homebrew/Cellar/pcre/8.45... (204 files, 4.6MB)
Uninstalling /opt/homebrew/Cellar/libffi/3.4.2... (17 files, 673.3KB)
Uninstalling /opt/homebrew/Cellar/m4/1.4.19... (13 files, 742.4KB)

$ rm -rf /opt/homebrew/etc/gnutls

$ rm -rf /opt/homebrew/etc/unbound/

restart...

@klaases commented Jul 30, 2022

After restart am still getting above error and timeout.

Not sure why:

❗ This VM is having trouble accessing https://k8s.gcr.io

???

@medyagh commented Aug 1, 2022

After restart am still getting above error and timeout.

Not sure why:

❗ This VM is having trouble accessing https://k8s.gcr.io

???

You must be on a network that does not have access to k8s.gcr.io. Could you verify manually, and change your network/proxy settings?

@klaases commented Aug 1, 2022

After restart am still getting above error and timeout.
❗ This VM is having trouble accessing https://k8s.gcr.io

When running curl, I see the following:

$ curl "https://k8s.gcr.io"

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="https://cloud.google.com/container-registry/">here</A>.
</BODY></HTML>

The 302 redirect suggests the registry host is reachable from the Mac itself; the VM's own network may still be the problem.

@klaases commented Aug 1, 2022

Here is the lastStart.txt file for reference:

lastStart.txt

@klaases commented Aug 1, 2022

Per @medyagh's suggestion, I tried disabling acceleration with the following:

pkg/drivers/qemu/qemu.go L384
// startCmd = append(startCmd, "-accel", "hvf")

However, it did not work:

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.24.2 preload ...
    > preloaded-images-k8s-v18-v1...:  342.89 MiB / 342.89 MiB  100.00% 28.29 M
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...| OUTPUT: 
ERROR: qemu-system-aarch64: Addressing limited to 32 bits, but memory exceeds it by 1073741824 bytes



🔥  Deleting "minikube" in qemu2 ...
🤦  StartHost failed, but will try again: creating host: create: creating: exit status 1
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...\ OUTPUT: 
ERROR: qemu-system-aarch64: Addressing limited to 32 bits, but memory exceeds it by 1073741824 bytes



😿  Failed to start qemu2 VM. Running "minikube delete" may fix it: creating host: create: creating: exit status 1

❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: exit status 1

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Will look into alternative acceleration options.
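The "Addressing limited to 32 bits, but memory exceeds it by 1073741824 bytes" error above reads like the requested RAM no longer fits in the guest's low-memory window. As a rough sanity check (the 1 GiB RAM base and 4 GiB ceiling here are my assumptions about the virt machine without highmem, not figures taken from the QEMU source):

```shell
#!/usr/bin/env sh
# Rough check: does the requested RAM fit below the 32-bit boundary?
# Assumes guest RAM starts at a 1 GiB base and must end below 4 GiB when
# highmem is unavailable -- assumptions for illustration only.
REQUEST_MIB=4000                 # what minikube passes to qemu via -m
LIMIT_MIB=$((4096 - 1024))       # 32-bit space minus the assumed RAM base

if [ "$REQUEST_MIB" -gt "$LIMIT_MIB" ]; then
  echo "exceeds low-memory window by $((REQUEST_MIB - LIMIT_MIB)) MiB"
else
  echo "fits"
fi
```

If this is the cause, a smaller allocation (for example `minikube start --memory=3072`) might sidestep the overflow, at the cost of less RAM for the cluster.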

@klaases commented Aug 1, 2022

For Mac M1, I found the following accelerators:

qemu-system-aarch64 -accel help
Accelerators supported in QEMU binary:
hvf
tcg

When running with tcg I get the following:

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the qemu2 (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...| OUTPUT: 
ERROR: qemu-system-aarch64: Addressing limited to 32 bits, but memory exceeds it by 1073741824 bytes



🔥  Deleting "minikube" in qemu2 ...
🤦  StartHost failed, but will try again: creating host: create: creating: exit status 1
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...- OUTPUT: 
ERROR: qemu-system-aarch64: Addressing limited to 32 bits, but memory exceeds it by 1073741824 bytes



😿  Failed to start qemu2 VM. Running "minikube delete" may fix it: creating host: create: creating: exit status 1

❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: exit status 1

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

This does not occur when using hvf, which had worked in the past.

@klaases commented Aug 12, 2022

Same Load failed: 37: Operation already in progress error after restart.

@klaases commented Aug 16, 2022

The Load failed: 37: Operation already in progress error was solved with:

1 - first tried stopping the processes:

% ps aux | grep vm

root               125   0.0  0.0 408525760   3168   ??  Ss   Fri05PM   0:00.65 /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
root                93   0.0  0.0 408655872   3232   ??  Ss   Fri05PM   0:00.31 /opt/vde/bin/vde_vmnet --vmnet-gateway=192.168.105.1 /var/run/vde.ctl

sudo kill -9 PID

However, this did not help.


2 - then tried unloading socket_vmnet, which worked; however, when running launchctl list | grep vm, this item did not appear.

% sudo launchctl unload /Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist

Successful install:

% sudo make PREFIX=/opt/socket_vmnet install                                       
mkdir -p "//opt/socket_vmnet/bin"
install socket_vmnet "//opt/socket_vmnet/bin/socket_vmnet"
install socket_vmnet_client "//opt/socket_vmnet/bin/socket_vmnet_client"
sed -e "s@/opt/socket_vmnet@/opt/socket_vmnet@g" launchd/io.github.lima-vm.socket_vmnet.plist > "/Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist"
launchctl load -w "/Library/LaunchDaemons/io.github.lima-vm.socket_vmnet.plist"

@klaases commented Aug 16, 2022

Start socket_vmnet:

/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 -m 1024 -accel hvf -cdrom ~/boot2docker.iso

Start Qemu:

qemu-system-aarch64 -M virt -cpu host \
  -drive file=/opt/homebrew/Cellar/qemu/7.0.0_2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
  -display none -accel hvf -m 4000 -smp 2 -boot d \
  -cdrom /Users/klaases/.minikube/machines/minikube/boot2docker.iso \
  -qmp unix:/Users/klaases/.minikube/machines/minikube/monitor,server,nowait \
  -pidfile /Users/klaases/.minikube/machines/minikube/qemu.pid \
  -nic vde,model=virtio,sock= \
  -daemonize /Users/klaases/.minikube/machines/minikube/disk.qcow2

Path to Qemu on MacOS:
/opt/homebrew/Cellar/qemu/7.0.0_2/bin/qemu-system-aarch64

@klaases klaases force-pushed the qemu2 branch 2 times, most recently from da65f86 to e57a721 Compare August 16, 2022 21:02
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 22, 2022
@k8s-ci-robot

@klaases: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@spowelljr

Closing this due to #14989

@spowelljr spowelljr closed this Sep 30, 2022