
Add ability to create extra disks on hyperkit vms #11483

Merged

Conversation

Contributor

@BlaineEXE BlaineEXE commented May 21, 2021

Add the ability to create and attach extra disks to hyperkit vms.

For example, if I wish to add 3 extra disks to my minikube VM, each 10GB in size, the command with the 2 new options would look like this:

> minikube start --driver=hyperkit --extra-disks=3 --extra-disk-size=10gb

In the minikube VM, I would expect to see the extra disks (in addition to vda and vda1 normally present) available in /dev as shown below.

> minikube ssh
$  ls /dev/vd*
vda
vda1
vdb
vdc
vdd

Signed-off-by: Blaine Gardner blaine.gardner@redhat.com

Fixes #3883
This will be useful to me for developing Rook, and I think it may be useful for others doing other types of storage development on Kubernetes with minikube.
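
For context, here is a minimal, self-contained sketch of what creating such disks involves. This is illustrative only, not the PR's actual implementation; the function name, file naming, and size handling are assumptions. Each extra disk is just a sparse raw file created inside the machine's directory and attached to the VM as an additional virtio block device, which is why the devices show up as vdb, vdc, and vdd inside the guest.

package extradisks // illustrative sketch only, not the PR's actual code

import (
    "fmt"
    "os"
    "path/filepath"
)

// createExtraDisks creates `count` sparse raw disk files inside the machine
// directory so that they are removed together with the machine on delete.
func createExtraDisks(machineDir string, count int, sizeMB int64) ([]string, error) {
    var paths []string
    for i := 0; i < count; i++ {
        p := filepath.Join(machineDir, fmt.Sprintf("extra-disk-%d.rawdisk", i))
        f, err := os.Create(p)
        if err != nil {
            return nil, err
        }
        // Truncate to the requested size; the file stays sparse until written to.
        if err := f.Truncate(sizeMB * 1024 * 1024); err != nil {
            f.Close()
            return nil, err
        }
        if err := f.Close(); err != nil {
            return nil, err
        }
        paths = append(paths, p)
    }
    // The hyperkit driver would then hand each path to the VM as an extra
    // virtio-blk device (appearing as vdb, vdc, ... in the guest).
    return paths, nil
}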

@k8s-ci-robot
Contributor

Welcome @BlaineEXE!

It looks like this is your first PR to kubernetes/minikube 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/minikube has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @BlaineEXE. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label May 21, 2021
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels May 21, 2021
@minikube-bot
Collaborator

Can one of the admins verify this patch?

Member

@medyagh medyagh left a comment

@BlaineEXE thank you for contributing this interesting PR. Do you mind adding an "after this PR" example, showing the feature in action, to the PR description?

@BlaineEXE
Contributor Author

> @BlaineEXE thank you for contributing this interesting PR. Do you mind adding an "after this PR" example, showing the feature in action, to the PR description?

I'm happy to add more details or move them around, whichever is most helpful. This is my first PR to the minikube repo, though, and I'm not quite sure what you mean. I don't see anything like that in the contributor guide to help me get there.

@medyagh
Member

medyagh commented May 21, 2021

> @BlaineEXE thank you for contributing this interesting PR. Do you mind adding an "after this PR" example, showing the feature in action, to the PR description?

> I'm happy to add more details or move them around, whichever is most helpful. This is my first PR to the minikube repo, though, and I'm not quite sure what you mean. I don't see anything like that in the contributor guide to help me get there.

Thank you for reading the contributor guide, and welcome to contributing to minikube. I meant that if you run the PR with this new feature locally and paste the output showing how you would use it into the PR description, it will be easier for me to review.

@BlaineEXE
Contributor Author

> Thank you for reading the contributor guide, and welcome to contributing to minikube. I meant that if you run the PR with this new feature locally and paste the output showing how you would use it into the PR description, it will be easier for me to review.

Done. Let me know if more details would be helpful.

@BlaineEXE
Contributor Author

@medyagh are you able to approve running workflows? I think that to fix the issue I mentioned, I need to change the deploy/iso/minikube-iso/package/automount/minikube-automount script to support multiple disks. I'd also like to use the ISO built by the CI if possible, since it takes a very long time to build on my machine and recently failed for me.

@BlaineEXE BlaineEXE force-pushed the add-extra-disks-to-hyperkit-vms branch 2 times, most recently from dc23417 to 5a30c47 Compare May 24, 2021 22:38
@@ -164,6 +166,8 @@ func initMinikubeFlags() {
startCmd.Flags().StringP(network, "", "", "network to run minikube with. Now it is used by docker/podman and KVM drivers. If left empty, minikube will create a new network.")
startCmd.Flags().StringVarP(&outputFormat, "output", "o", "text", "Format to print stdout in. Options include: [text,json]")
startCmd.Flags().StringP(trace, "", "", "Send trace events. Options include: [gcp]")
startCmd.Flags().Int(extraDisks, 0, "Number of extra disks created and attached to the minikube VM.")
Member

Add "(only used by HyperKit driver)" to the help text.

@@ -164,6 +166,8 @@ func initMinikubeFlags() {
startCmd.Flags().StringP(network, "", "", "network to run minikube with. Now it is used by docker/podman and KVM drivers. If left empty, minikube will create a new network.")
startCmd.Flags().StringVarP(&outputFormat, "output", "o", "text", "Format to print stdout in. Options include: [text,json]")
startCmd.Flags().StringP(trace, "", "", "Send trace events. Options include: [gcp]")
startCmd.Flags().Int(extraDisks, 0, "Number of extra disks created and attached to the minikube VM.")
startCmd.Flags().String(extraDiskSize, defaultDiskSize, "Disk size allocated for extra disks attached to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g).")
Member

Add "(only used by hyperkit driver)" here as well.

Contributor Author

I opted to word this as "currently only implemented for hyperkit driver", since I think that makes it clearer that it can be implemented for other drivers but currently isn't. Let me know if you still prefer your wording.
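
In other words, the flag registration from the diff above would end up reading roughly like this (the exact final wording in the merged PR may differ):

startCmd.Flags().Int(extraDisks, 0, "Number of extra disks created and attached to the minikube VM (currently only implemented for hyperkit driver)")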

pkg/drivers/common.go (outdated review thread, resolved)
@@ -525,9 +531,15 @@ func updateExistingConfigFromFlags(cmd *cobra.Command, existing *config.ClusterC
// validate the memory size in case user changed their system memory limits (example change docker desktop or upgraded memory.)
validateRequestedMemorySize(cc.Memory, cc.Driver)

Member

Add a warning if the user passes this flag with a non-hyperkit driver, e.g.:

out.WarningT("Specifying extra disk is currently only supported for hyperkit driver, if you can contribute to add this feature please create a PR....")

Contributor Author

@BlaineEXE BlaineEXE May 25, 2021

I added this check for the flags on both create and update (see checkExtraDiskOptions). If I were a user, I think I would expect that using these flags with an unsupported driver would cause a failure, since the option isn't supported. Would it be better to return an error instead of just logging a warning?

Member

Since we are not passing the option to the driver when the driver doesn't support it, it won't do anything. I would be okay with exiting with a usage error or just a warning. Your call!

Contributor Author

@BlaineEXE BlaineEXE May 26, 2021

[Edited] It seems like using a warning is the trend elsewhere, so I will stick with that.
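
For illustration, a check along the lines discussed in this thread could look roughly like the sketch below. This is not the PR's exact checkExtraDiskOptions; the supported-driver lookup and flag handling are assumptions, and out.WarningT is the helper suggested above.

package cmd // sketch only, not the PR's exact code

import (
    "github.com/spf13/cobra"

    "k8s.io/minikube/pkg/minikube/out"
)

// checkExtraDiskOptions warns when --extra-disks is requested for a driver
// that does not implement extra disks yet (currently only hyperkit does).
func checkExtraDiskOptions(cmd *cobra.Command, driverName string) {
    supported := map[string]bool{"hyperkit": true}
    if cmd.Flags().Changed("extra-disks") && !supported[driverName] {
        out.WarningT("Specifying extra disks is currently only supported for the hyperkit driver. If you can contribute to add this feature, please create a PR.")
    }
}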

@BlaineEXE BlaineEXE force-pushed the add-extra-disks-to-hyperkit-vms branch 2 times, most recently from b9d94de to 410b3da Compare May 25, 2021 00:31
Member

@medyagh medyagh left a comment

Also consider adding a section called "driver specific features" to the HyperKit website documentation:

  • add extra disk...
    example ...

https://minikube.sigs.k8s.io/docs/drivers/hyperkit/

@@ -118,6 +118,8 @@ const (
defaultSSHUser = "root"
defaultSSHPort = 22
listenAddress = "listen-address"
extraDisks = "extra-disks"
extraDiskSize = "extra-disk-size"
Member

For the sake of not having too many flags, what do you think about reusing the existing disk-size flag we already have?

$ minikube start --help | grep disk
      --disk-size='20000mb': Disk size allocated to the minikube VM (format: <number>[<unit>], where unit = b, k, m or g).

So if the user passes this flag, we will use the disk-size value as the extra disk size; and if they pass --disk-size, it will be used for both the main disk and the extra disks. Is that acceptable?

Contributor Author

I like that idea, and I thought about doing that. It is more flexible to have separate options, but it does add complexity for the user. I keep waffling on this. For all of the use cases I have in mind, I don't think there is a strong reason to have the extra parameter just to save a few tens of gigabytes on a hard disk during development. I will take your suggestion to simply reuse the disk-size parameter for the extra disks as well; if that ends up being insufficient, it can always be changed in the future.
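
A rough sketch of what the agreed-upon approach means in practice (the function name is an assumption, not the PR's exact code): the number of extra disks comes from --extra-disks, and their size simply reuses whatever --disk-size already resolves to.

package cmd // sketch only, not the PR's exact code

import "github.com/spf13/viper"

// extraDiskConfig shows the agreed approach: the count comes from
// --extra-disks, and the size reuses the existing --disk-size value.
func extraDiskConfig() (count int, size string) {
    count = viper.GetInt("extra-disks") // e.g. --extra-disks=3
    size = viper.GetString("disk-size") // e.g. "20000mb"; sizes main and extra disks alike
    return count, size
}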

@BlaineEXE BlaineEXE force-pushed the add-extra-disks-to-hyperkit-vms branch 4 times, most recently from a39290d to 456c49b Compare May 26, 2021 14:19
@BlaineEXE
Contributor Author

BlaineEXE commented May 26, 2021

@medyagh the ISO build keeps failing on my machine, so I'm unable to test my latest changes to the ISO. It's my understanding from here that the CI will build the ISO, which I can then test locally. Are you able to approve running the CI here so I can test this?

@medyagh
Member

medyagh commented May 27, 2021

@BlaineEXE no worries, we can build the ISO on the PR.

@medyagh
Member

medyagh commented May 27, 2021

/ok-to-build-iso

@medyagh
Member

medyagh commented May 27, 2021

ok-to-build-iso

@medyagh medyagh self-requested a review May 27, 2021 01:23
@leseb leseb left a comment

I don't see code to clean up the disks; is this handled automatically when removing the VM? Just want to make sure we don't have disk files lingering around when the environment is destroyed.

@BlaineEXE
Contributor Author

BlaineEXE commented May 27, 2021

RE: @leseb

> I don't see code to clean up the disks; is this handled automatically when removing the VM? Just want to make sure we don't have disk files lingering around when the environment is destroyed.

I don't think it's necessary. When creating a VM, minikube creates a directory for each machine in the config directory (e.g., $HOME/.minikube/machines/<machine-name>), and when the machine is deleted, that directory (along with the raw disk files in it) is deleted as well.
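
A simplified sketch of the cleanup behavior described above (the paths and function name are illustrative, not minikube's actual delete code): because the raw disk files live inside the per-machine directory, removing that directory removes them too.

package cleanup // simplified, illustrative sketch

import (
    "os"
    "path/filepath"
)

// deleteMachine removes the whole per-machine directory, e.g.
// $HOME/.minikube/machines/<machine-name>, which also removes any extra
// raw disk files that were created inside it.
func deleteMachine(minikubeHome, machineName string) error {
    return os.RemoveAll(filepath.Join(minikubeHome, "machines", machineName))
}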

@sharifelgamal
Collaborator

/ok-to-test

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 50.6s    | 51.0s               |
| enable ingress | 36.6s    | 36.1s               |
+----------------+----------+---------------------+

Times for minikube start: 53.2s 49.3s 47.4s 52.9s 50.3s
Times for minikube (PR 11483) start: 50.8s 50.3s 55.9s 50.8s 47.1s

Times for minikube (PR 11483) ingress: 34.8s 34.9s 34.5s 34.2s 42.3s
Times for minikube ingress: 35.8s 34.3s 36.8s 41.7s 34.7s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 23.4s    | 22.8s               |
| enable ingress | 31.4s    | 29.3s               |
+----------------+----------+---------------------+

Times for minikube ingress: 31.5s 32.7s 33.0s 28.5s 31.5s
Times for minikube (PR 11483) ingress: 29.5s 28.1s 31.0s 29.0s 29.0s

Times for minikube start: 23.9s 24.6s 22.8s 23.2s 22.7s
Times for minikube (PR 11483) start: 22.4s 24.9s 22.9s 21.9s 21.6s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 42.1s    | 44.4s               |
| enable ingress |          |                     |
+----------------+----------+---------------------+

Times for minikube start: 32.8s 43.3s 43.4s 47.2s 43.9s
Times for minikube (PR 11483) start: 44.6s 43.6s 43.2s 46.8s 43.7s

@medyagh
Member

medyagh commented Jun 21, 2021

@BlaineEXE sorry for the long wait on this PR. Do you mind pulling in upstream?

@BlaineEXE BlaineEXE force-pushed the add-extra-disks-to-hyperkit-vms branch from f5f504f to e51052e Compare June 23, 2021 15:56
@BlaineEXE
Contributor Author

No problem @medyagh. Thanks for looking. I just updated against upstream master.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 46.6s    | 47.3s               |
| enable ingress | 38.5s    | 37.7s               |
+----------------+----------+---------------------+

Times for minikube start: 47.5s 46.6s 47.2s 46.3s 45.3s
Times for minikube (PR 11483) start: 46.4s 47.9s 48.8s 47.4s 46.1s

Times for minikube ingress: 39.2s 42.7s 34.2s 33.8s 42.7s
Times for minikube (PR 11483) ingress: 34.7s 42.7s 33.7s 42.2s 35.3s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 22.0s    | 21.9s               |
| enable ingress | 36.5s    | 33.8s               |
+----------------+----------+---------------------+

Times for minikube start: 23.7s 21.2s 22.2s 21.9s 21.1s
Times for minikube (PR 11483) start: 21.6s 22.4s 21.6s 22.0s 21.9s

Times for minikube ingress: 36.0s 33.5s 38.0s 38.0s 37.0s
Times for minikube (PR 11483) ingress: 34.5s 33.5s 34.5s 33.0s 33.5s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 41.8s    | 45.6s               |
| enable ingress |          |                     |
+----------------+----------+---------------------+

Times for minikube start: 32.4s 42.8s 42.9s 43.9s 47.1s
Times for minikube (PR 11483) start: 47.1s 46.6s 43.9s 43.2s 47.3s

@minikube-bot
Collaborator

These are the flake rates of all failed tests on Docker_Linux.

Failed Tests Flake Rate (%)
TestStartStop/group/embed-certs/serial/Pause 5.56 (chart)

@minikube-bot
Collaborator

These are the flake rates of all failed tests on Docker_Linux_containerd.

Failed Tests Flake Rate (%)
TestFunctional/parallel/LogsCmd 4.35 (chart)
TestFunctional/parallel/LogsFileCmd 4.35 (chart)

@BlaineEXE
Contributor Author

@medyagh It's a little unclear to me whether the Docker_Linux and Docker_Linux_containerd tests failed or passed. It seems like there were 267 and 256 successes respectively, with some flakiness. Is that right?

Are there more things I should be doing from here to get this merged?

@spowelljr
Member

/retest-this-please

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 51.9s    | 51.7s               |
| enable ingress | 31.9s    | 33.7s               |
+----------------+----------+---------------------+

Times for minikube ingress: 31.3s 32.3s 31.3s 32.8s 31.9s
Times for minikube (PR 11483) ingress: 32.4s 41.3s 31.3s 32.3s 31.3s

Times for minikube start: 53.8s 51.1s 51.2s 52.1s 51.6s
Times for minikube (PR 11483) start: 51.6s 52.2s 50.6s 52.5s 51.4s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 22.4s    | 22.4s               |
| enable ingress | 32.0s    | 34.5s               |
+----------------+----------+---------------------+

Times for minikube (PR 11483) start: 23.0s 21.9s 21.9s 21.7s 23.2s
Times for minikube start: 23.5s 22.3s 22.0s 21.9s 22.4s

Times for minikube ingress: 27.0s 34.5s 34.5s 36.0s 28.0s
Times for minikube (PR 11483) ingress: 35.5s 36.5s 37.5s 34.5s 28.5s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 41.6s    | 44.0s               |
| enable ingress |          |                     |
+----------------+----------+---------------------+

Times for minikube start: 32.1s 45.0s 44.3s 43.1s 43.7s
Times for minikube (PR 11483) start: 44.4s 43.5s 44.3s 43.8s 44.1s

Add the ability to create and attach extra disks to hyperkit vms.

Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
@BlaineEXE BlaineEXE force-pushed the add-extra-disks-to-hyperkit-vms branch from e51052e to 8f05ee0 Compare July 28, 2021 15:57
@sharifelgamal
Collaborator

ok-to-build-iso

@sharifelgamal
Collaborator

@BlaineEXE, sorry for the delay here. We'll rebuild the ISO and run the tests one more time. If there's a failure outside macOS, it's highly unlikely to be related to this PR.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 47.1s    | 48.3s               |
| enable ingress | 32.5s    | 35.1s               |
+----------------+----------+---------------------+

Times for minikube start: 48.5s 47.7s 47.1s 46.4s 45.9s
Times for minikube (PR 11483) start: 46.4s 47.7s 47.4s 49.8s 49.9s

Times for minikube ingress: 32.3s 32.2s 33.8s 31.7s 32.2s
Times for minikube (PR 11483) ingress: 39.7s 39.8s 31.3s 32.2s 32.3s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 22.5s    | 21.8s               |
| enable ingress | 36.0s    | 35.4s               |
+----------------+----------+---------------------+

Times for minikube ingress: 35.0s 35.5s 36.5s 36.5s 36.5s
Times for minikube (PR 11483) ingress: 35.5s 37.0s 35.5s 35.5s 33.5s

Times for minikube start: 23.7s 22.1s 21.3s 21.5s 24.0s
Times for minikube (PR 11483) start: 22.3s 21.2s 22.1s 22.2s 21.1s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 38.8s    | 42.1s               |
| enable ingress |          |                     |
+----------------+----------+---------------------+

Times for minikube start: 31.9s 37.0s 43.7s 43.8s 37.4s
Times for minikube (PR 11483) start: 43.8s 43.1s 43.0s 36.2s 44.1s

@minikube-bot
Collaborator

Hi @BlaineEXE, we have updated your PR with a reference to the newly built ISO. Pull the changes locally if you want to test with them or update your PR further.

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd_arm64 TestNetworkPlugins/group/kindnet/NetCatPod (gopogh) 0.00 (chart)
Docker_Linux_docker_arm64 TestFunctional/parallel/DashboardCmd (gopogh) 0.00 (chart)
Docker_macOS TestStartStop/group/default-k8s-different-port/serial/Pause (gopogh) 0.00 (chart)
Docker_Linux_docker_arm64 TestNetworkPlugins/group/enable-default-cni/Start (gopogh) 5.41 (chart)
Docker_Linux_containerd TestStartStop/group/no-preload/serial/Pause (gopogh) 6.12 (chart)
Docker_Linux_crio TestStartStop/group/old-k8s-version/serial/Pause (gopogh) 6.25 (chart)
Hyperkit_macOS TestStartStop/group/no-preload/serial/AddonExistsAfterStop (gopogh) 6.98 (chart)
Hyperkit_macOS TestStartStop/group/no-preload/serial/DeployApp (gopogh) 6.98 (chart)
Hyperkit_macOS TestNetworkPlugins/group/cilium/Start (gopogh) 9.30 (chart)
Docker_Linux_crio TestPause/serial/Pause (gopogh) 10.42 (chart)
Docker_Linux_crio TestPause/serial/VerifyStatus (gopogh) 10.42 (chart)
Docker_macOS TestStartStop/group/embed-certs/serial/Pause (gopogh) 10.64 (chart)
Docker_Linux_docker_arm64 TestNetworkPlugins/group/bridge/Start (gopogh) 13.51 (chart)
Docker_Linux_docker_arm64 TestNetworkPlugins/group/kubenet/Start (gopogh) 13.51 (chart)
Docker_Linux_crio TestPause/serial/PauseAgain (gopogh) 16.67 (chart)
Docker_Linux_containerd TestPause/serial/Pause (gopogh) 18.37 (chart)
Docker_Linux_containerd TestPause/serial/VerifyStatus (gopogh) 18.37 (chart)
Hyperkit_macOS TestStartStop/group/default-k8s-different-port/serial/DeployApp (gopogh) 18.60 (chart)
Docker_Linux_crio TestNetworkPlugins/group/calico/DNS (gopogh) 18.75 (chart)
Docker_Linux_docker_arm64 TestNetworkPlugins/group/kindnet/Start (gopogh) 21.62 (chart)
Hyperkit_macOS TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (gopogh) 25.58 (chart)
Hyperkit_macOS TestNetworkPlugins/group/calico/ControllerPod (gopogh) 25.81 (chart)
Docker_Linux_containerd TestPause/serial/PauseAgain (gopogh) 26.53 (chart)
Docker_Linux_crio TestStartStop/group/default-k8s-different-port/serial/Stop (gopogh) 27.08 (chart)
Hyperkit_macOS TestSkaffold (gopogh) 30.23 (chart)
Hyperkit_macOS TestMultiNode/serial/DeployApp2Nodes (gopogh) 32.56 (chart)
Hyperkit_macOS TestMultiNode/serial/PingHostFrom2Pods (gopogh) 32.56 (chart)
Docker_Linux_crio TestMultiNode/serial/PingHostFrom2Pods (gopogh) 33.33 (chart)
Docker_macOS TestNetworkPlugins/group/custom-weave/Start (gopogh) 34.78 (chart)
Docker_Linux_crio_arm64 TestStartStop/group/embed-certs/serial/DeployApp (gopogh) 41.38 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 49.3s    | 48.1s               |
| enable ingress | 34.0s    | 32.1s               |
+----------------+----------+---------------------+

Times for minikube start: 53.1s 47.9s 46.6s 47.5s 51.4s
Times for minikube (PR 11483) start: 48.7s 47.8s 48.3s 48.5s 47.4s

Times for minikube ingress: 32.8s 31.8s 34.8s 38.8s 31.8s
Times for minikube (PR 11483) ingress: 33.3s 30.3s 31.8s 31.7s 33.3s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 22.1s    | 21.7s               |
| enable ingress | 30.5s    | 31.2s               |
+----------------+----------+---------------------+

Times for minikube start: 21.6s 22.2s 23.1s 22.0s 21.7s
Times for minikube (PR 11483) start: 20.8s 21.7s 21.8s 21.8s 22.4s

Times for minikube (PR 11483) ingress: 35.0s 28.5s 26.5s 38.5s 27.5s
Times for minikube ingress: 26.5s 26.5s 31.5s 34.5s 33.5s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 11483) |
+----------------+----------+---------------------+
| minikube start | 41.9s    | 43.9s               |
| enable ingress |          |                     |
+----------------+----------+---------------------+

Times for minikube start: 42.4s 43.4s 36.7s 44.2s 42.8s
Times for minikube (PR 11483) start: 44.1s 43.7s 43.7s 44.0s 43.9s

@BlaineEXE
Contributor Author

Thanks @sharifelgamal. Would you be able to help me find logs from the script deploy/iso/minikube-iso/package/automount/minikube-automount as it runs on test VMs in the failing tests? I'm having a little trouble figuring out if those are collected and where I can find them if they are.

@sharifelgamal
Collaborator

I don't personally know where those are collected; if I had to guess, I'd say they are not. Either way, if the logs aren't exported, they're eventually lost.

I'm not sure I see where the problem is; all the hyperkit tests passed. I'm comfortable merging this if you are.

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_crio TestErrorSpam/setup (gopogh) 0.00 (chart)
Docker_macOS TestFunctional/parallel/MountCmd/any-port (gopogh) 0.00 (chart)
KVM_Linux_containerd TestScheduledStopUnix (gopogh) 0.00 (chart)
Docker_macOS TestFunctional/parallel/BuildImage (gopogh) 2.44 (chart)
Docker_Linux TestKubernetesUpgrade (gopogh) 4.76 (chart)
Docker_Linux TestScheduledStopUnix (gopogh) 4.76 (chart)
KVM_Linux TestAddons/parallel/Ingress (gopogh) 5.00 (chart)
Docker_Linux_containerd_arm64 TestNetworkPlugins/group/kindnet/NetCatPod (gopogh) 5.56 (chart)
Docker_Linux_crio_arm64 TestKubernetesUpgrade (gopogh) 5.56 (chart)
Docker_macOS TestMultiNode/serial/RestartKeepsNodes (gopogh) 7.32 (chart)
Docker_Linux_crio_arm64 TestStartStop/group/embed-certs/serial/SecondStart (gopogh) 9.09 (chart)
KVM_Linux TestScheduledStopUnix (gopogh) 9.52 (chart)
Docker_macOS TestMultiNode/serial/StartAfterStop (gopogh) 9.76 (chart)
Docker_Linux_crio TestStartStop/group/old-k8s-version/serial/Pause (gopogh) 10.00 (chart)
Docker_macOS TestMultiNode/serial/AddNode (gopogh) 12.20 (chart)
Docker_Linux_crio TestMultiNode/serial/PingHostFrom2Pods (gopogh) 20.00 (chart)
Docker_macOS TestNetworkPlugins/group/false/DNS (gopogh) 21.05 (chart)
Docker_macOS TestNetworkPlugins/group/bridge/Start (gopogh) 28.95 (chart)
Docker_macOS TestNetworkPlugins/group/custom-weave/Start (gopogh) 36.84 (chart)
Docker_macOS TestNetworkPlugins/group/kubenet/Start (gopogh) 39.47 (chart)
KVM_Linux_crio TestJSONOutput/stop/parallel/DistinctCurrentSteps (gopogh) 40.00 (chart)
KVM_Linux_crio TestJSONOutput/stop/parallel/IncreasingCurrentSteps (gopogh) 40.00 (chart)
Docker_Linux_crio TestStartStop/group/embed-certs/serial/Stop (gopogh) 42.50 (chart)
Docker_macOS TestNetworkPlugins/group/calico/DNS (gopogh) 42.86 (chart)
Docker_Linux_crio_arm64 TestFunctional/parallel/PersistentVolumeClaim (gopogh) 52.78 (chart)
Docker_Linux_crio_arm64 TestMultiNode/serial/DeployApp2Nodes (gopogh) 52.78 (chart)
Docker_macOS TestNetworkPlugins/group/kindnet/Start (gopogh) 60.53 (chart)
Docker_macOS TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 65.71 (chart)
Docker_Linux_crio_arm64 TestMultiNode/serial/PingHostFrom2Pods (gopogh) 66.67 (chart)
Docker_Linux_containerd_arm64 TestNetworkPlugins/group/bridge/Start (gopogh) 77.42 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@BlaineEXE
Contributor Author

@sharifelgamal I feel good about this being merged. The flakes here don't seem far off from what I see in other PRs, and I have tested this pretty thoroughly on my side with starting, adding, stopping, and restarting the cluster with the extra disks.

Collaborator

@sharifelgamal sharifelgamal left a comment

Sounds good. Thanks for your contribution and your patience!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BlaineEXE, sharifelgamal

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 28, 2021
@sharifelgamal sharifelgamal merged commit faca0fd into kubernetes:master Jul 28, 2021
@BlaineEXE BlaineEXE deleted the add-extra-disks-to-hyperkit-vms branch July 28, 2021 21:26