
VM drivers: Fix images getting removed on stop/start #16655

Merged: 1 commit merged into kubernetes:master on Jun 13, 2023

Conversation

@spowelljr (Member) commented Jun 8, 2023

Fixes #12217

For full details see: #12217 (comment)

The short of it is that on a VM driver using the containerd or cri-o runtime, when starting after a stop, we would check for the existence of the preload images prior to the CRI being configured and started, resulting in an empty image list being returned. This would then trigger untarring the preload and overwriting all the existing images on the machine. There's another preload check later on, after the CRI is configured and started, which is why this wasn't an issue with the Docker driver.

This also reduces startup time on subsequent starts by 40 seconds (40%).

Before:

$ minikube start --driver kvm --container-runtime=containerd
$ minikube image pull gcr.io/k8s-minikube/busybox
$ minikube stop
$ time minikube start
real	1m40.979s
user	0m1.140s
sys	0m1.120s
$ minikube image list | grep "busybox"
<no output>

After:

$ minikube start --driver kvm --container-runtime=containerd
$ minikube image pull gcr.io/k8s-minikube/busybox
$ minikube stop
$ time minikube start
real	1m0.393s
user	0m0.673s
sys	0m0.315s
$ minikube image list | grep "busybox"
gcr.io/k8s-minikube/busybox:latest
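To illustrate the ordering problem described above, here is a minimal, hypothetical Go sketch; it is not the actual minikube code, and containerRuntime, ListImages, and imagesPreloaded are stand-in names. The point it shows is that an image-existence check must not treat an empty list from a not-yet-started CRI as "preload missing":

package preload

import "fmt"

// containerRuntime is a stand-in for minikube's container runtime interface.
type containerRuntime interface {
	Running() bool
	ListImages() ([]string, error)
}

// imagesPreloaded reports whether all preload images are already present.
// If the CRI (containerd/cri-o) has not been configured and started yet,
// ListImages returns an empty list, so an early check must not conclude the
// preload is missing; otherwise the preload tarball gets re-extracted over
// the existing image store on every stop/start.
func imagesPreloaded(cr containerRuntime, preload []string) (bool, error) {
	if !cr.Running() {
		// Too early to tell; defer to the later check that runs once the
		// CRI is up, instead of overwriting the image store.
		return true, nil
	}
	have, err := cr.ListImages()
	if err != nil {
		return false, fmt.Errorf("listing images: %w", err)
	}
	found := make(map[string]bool, len(have))
	for _, img := range have {
		found[img] = true
	}
	for _, want := range preload {
		if !found[want] {
			return false, nil
		}
	}
	return true, nil
}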

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA) on Jun 8, 2023
@k8s-ci-robot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files) and size/S (Denotes a PR that changes 10-29 lines, ignoring generated files) labels on Jun 8, 2023
@spowelljr (Member Author)

/ok-to-test

@k8s-ci-robot added the ok-to-test label (Indicates a non-member PR verified by an org member that is safe to test) on Jun 8, 2023
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16655) |
+----------------+----------+---------------------+
| minikube start | 53.6s    | 55.0s               |
| enable ingress | 28.5s    | 28.8s               |
+----------------+----------+---------------------+

Times for minikube start: 53.9s 53.9s 54.3s 53.5s 52.6s
Times for minikube (PR 16655) start: 54.7s 53.6s 57.9s 54.5s 54.0s

Times for minikube ingress: 27.8s 27.8s 28.3s 29.3s 29.3s
Times for minikube (PR 16655) ingress: 28.8s 28.8s 28.9s 29.4s 27.8s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16655) |
+----------------+----------+---------------------+
| minikube start | 24.5s    | 25.1s               |
| enable ingress | 20.8s    | 21.2s               |
+----------------+----------+---------------------+

Times for minikube ingress: 21.0s 21.0s 20.5s 21.0s 20.9s
Times for minikube (PR 16655) ingress: 20.9s 21.0s 21.9s 20.9s 21.4s

Times for minikube (PR 16655) start: 25.2s 26.2s 26.0s 23.4s 24.8s
Times for minikube start: 22.9s 25.7s 26.1s 22.5s 25.3s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 16655) |
+----------------+----------+---------------------+
| minikube start | 23.8s    | 22.0s               |
| enable ingress | 26.8s    | 31.6s               |
+----------------+----------+---------------------+

Times for minikube start: 24.5s 24.5s 21.4s 24.5s 23.8s
Times for minikube (PR 16655) start: 21.1s 21.8s 21.8s 21.7s 23.6s

Times for minikube ingress: 31.4s 21.4s 18.4s 31.4s 31.4s
Times for minikube (PR 16655) ingress: 31.4s 31.4s 31.4s 31.4s 32.4s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment                   | Failed Test                                                      | Flake Rate (%)
------------------------------|------------------------------------------------------------------|---------------
Hyperkit_macOS                | TestStoppedBinaryUpgrade/MinikubeLogs                            | 0.00
QEMU_macOS                    | TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation  | 0.00
Docker_Linux_containerd_arm64 | TestKubernetesUpgrade                                            | 0.68
Hyperkit_macOS                | TestNetworkPlugins/group/custom-flannel/Start                    | 0.71
QEMU_macOS                    | TestIngressAddonLegacy/serial/ValidateIngressAddonActivation     | 4.41
QEMU_macOS                    | TestIngressAddonLegacy/StartLegacyK8sCluster                     | 5.15
Hyperkit_macOS                | TestStoppedBinaryUpgrade/Upgrade                                 | 9.93
Hyperkit_macOS                | TestRunningBinaryUpgrade                                         | 11.35
QEMU_macOS                    | TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel         | 45.59


klog.Infof("preload failed, will try to load cached images: %v", err)
switch err.(type) {
case *cruntime.ErrISOFeature:
out.ErrT(style.Tip, "Existing disk is missing new features ({{.error}}). To upgrade, run 'minikube delete'", out.V{"error": err})
Member:

If the deleteOnFailure flag is specified, minikube should delete and recreate.

@spowelljr (Member Author):

You want that added to the message?

Member:

Hm, I was thinking maybe if the user allows us to delete minikube we just do it: if --deleteOnFailure, then recreate the cluster instead of telling the user to do it.

@spowelljr (Member Author):

I tried using --delete-on-failure, but it seems that it only applies to creating the cluster; since the cluster is already created by the time we get here, and this is K8s logic, it doesn't retry. If we want this, I'd recommend doing it in a separate PR, since a lot of logic will have to be added for it.

minikube start --driver qemu --delete-on-failure=true
😄  minikube v1.30.1 on Darwin 13.4 (arm64)
✨  Using the qemu2 driver based on user configuration
🌐  Automatically selected the socket_vmnet network
👍  Starting control plane node minikube in cluster minikube
🔥  Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...

❌  Exiting due to K8S_INSTALL_FAILED: Failed to update cluster: this error is expected

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
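For context only, here is a hypothetical Go sketch (not minikube's actual code; startWithDeleteOnFailure and its parameters are invented names) of the kind of wrapper that would be needed for --delete-on-failure to also cover a K8s provisioning failure on an already-created cluster, i.e. delete and recreate instead of exiting as shown above:

package start

import "fmt"

// startWithDeleteOnFailure is a hypothetical wrapper: startCluster and
// deleteCluster stand in for minikube's real provisioning and deletion
// paths. If the first start fails and the user opted in via
// --delete-on-failure, tear the cluster down and retry once.
func startWithDeleteOnFailure(startCluster, deleteCluster func() error, deleteOnFailure bool) error {
	err := startCluster()
	if err == nil || !deleteOnFailure {
		return err
	}
	if derr := deleteCluster(); derr != nil {
		return fmt.Errorf("start failed (%v) and cleanup also failed: %w", err, derr)
	}
	// Recreate the cluster from scratch after the destructive cleanup.
	return startCluster()
}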

@medyagh (Member) commented Jun 9, 2023

Really good job on this PR @spowelljr

KVM_Linux — Jenkins: completed with success in 62.81 minutes.
KVM_Linux_containerd — Jenkins: completed with success in 59.66 minutes.

This also makes the KVM containerd test much faster: before this PR it was taking 70+ minutes to finish, now it takes 59 minutes, an 11-minute speedup for the KVM test.

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: medyagh, spowelljr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@spowelljr spowelljr merged commit dcc3fd2 into kubernetes:master Jun 13, 2023
@spowelljr spowelljr deleted the fixISORestart branch June 13, 2023 17:10
Labels
approved - Indicates a PR has been approved by an approver from all required OWNERS files.
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
size/S - Denotes a PR that changes 10-29 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Frequent test failures of TestPreload