
VM has 50% resting CPU usage when idle #3207

Closed
samuela opened this issue Oct 2, 2018 · 54 comments
Labels
area/performance: Performance related issues
co/hyperkit: Hyperkit related issues
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
os/macos
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@samuela
Contributor

samuela commented Oct 2, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Please provide the following details:

Environment: macOS 10.13.6

Minikube version (use minikube version): v0.29.0

  • OS (e.g. from /etc/os-release): macOS 10.13.6
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): hyperkit
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.29.0
  • Install tools: n/a
  • Others: n/a

What happened:
I just installed and set up a fresh minikube cluster. CPU usage is pegged at ~50% even though no pods have been launched and nothing is happening on the cluster. I've observed the same behavior across both hyperkit and VirtualBox.

[screenshot: screen shot 2018-10-01 at 9 05 14 pm]

I ran minikube addons enable heapster to get some insight into where all the CPU is going. It looks like kube-apiserver-minikube and kube-controller-manager-minikube are the primary offenders.
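For reference, the commands involved, as a sketch (kubectl top needs heapster's metrics to populate, which takes a minute or two):

    minikube addons enable heapster
    # After a minute or two of metrics collection:
    kubectl top pods --all-namespaces   # per-pod CPU/memory
    kubectl top nodes                   # node-level view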

What you expected to happen:
I expected the CPU usage to fall to basically zero at rest. I understand that this may just be the baseline CPU usage for some of these services (liveness checks, etc.), but when running in minikube mode it would really be nice to reduce the CPU consumption so that we don't kill all of our laptop batteries.

How to reproduce it (as minimally and precisely as possible):
Create a minikube cluster on macOS with the appropriate versions.

Output of minikube logs (if applicable): n/a

Anything else we need to know: n/a

@tstromberg tstromberg added os/macos co/hyperkit Hyperkit related issues labels Oct 2, 2018
@kvokka

kvokka commented Oct 4, 2018

The same issue on Linux.

Environment: Ubuntu 18.04

Minikube version (use minikube version): v0.29.0

  • OS: (e.g. from /etc/os-release): VERSION="18.04.1 LTS (Bionic Beaver)"
  • VM Driver: (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
  • ISO version: (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.29.0
  • Install tools: n/a
  • Others: n/a

What happened:

[screenshot: screenshot from 2018-10-04 01-51-27]

This is the consumption on a 6700HQ CPU with a brand-new installation and no pods.

With the KVM2 driver I get the same result.

@corneliusweig
Contributor

Same for me on Gentoo Linux. I have tried the kvm2 and virtualbox drivers, but both have an idle CPU load of ~50%. I saw the same behavior with minikube 0.28.{0,2}.

@afbjorklund
Collaborator

Guess it needs to be profiled... Which Kubernetes version are you using? v1.10.0 (the default)?

https://github.com/kubernetes/community/blob/master/contributors/devel/profiling.md
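A minimal sketch of grabbing a CPU profile from the apiserver along those lines (assuming profiling is enabled on the apiserver, which is the default):

    # Capture a 30-second CPU profile through kubectl, then inspect the hot spots.
    kubectl get --raw '/debug/pprof/profile?seconds=30' > apiserver.pprof
    go tool pprof -top apiserver.pprof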

@tstromberg tstromberg added the area/performance Performance related issues label Oct 6, 2018
@samuela
Contributor Author

samuela commented Oct 6, 2018

@afbjorklund Yeah, I'm running the default version.

@ianseyer

ianseyer commented Oct 9, 2018

DevOps guy here - this is preventing some of our devs from working locally.

I am running Arch personally (minikube v0.29) and get 10-30% spikes with my full application running, which seems acceptable, but others (Ubuntu 18, minikube v0.30) are getting near-constant 40% usage with no pods live on both the kvm2 and virtualbox drivers.

@afbjorklund
Collaborator

@ianseyer are you using --vm-driver none on Arch? And are the others using the minikube.iso?

@ianseyer

ianseyer commented Oct 9, 2018

I am using kvm2.

@corneliusweig
Contributor

I was playing around and found that docker alone creates a ~30% load on my host system. What I did was stop kubelet.service and restart docker.service so that all containers are gone. So it might not be only a Kubernetes problem after all.
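Roughly the steps, for anyone who wants to repeat the measurement:

    # Inside the VM: stop kubelet so it cannot restart containers, then
    # bounce docker so all containers are gone and docker sits idle.
    minikube ssh
    sudo systemctl stop kubelet.service
    sudo systemctl restart docker.service
    exit
    # Back on the host, watch the VM process (e.g. VBoxHeadless) in top.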

@samuela
Contributor Author

samuela commented Oct 11, 2018

@corneliusweig I'm not sure about your system, but I just checked on my machine and I don't think that's what's going on: the resting docker CPU load is around 6%.

macOS 10.14
MacBook Pro (13-inch, 2017, Two Thunderbolt 3 ports)
2.3 GHz Intel Core i5

I ran docker run -it --rm ubuntu and let it sit for a minute. I'm using Docker for Mac with the hyperkit driver.

@kvokka

kvokka commented Oct 18, 2018

Minikube version (use minikube version): v0.30.0

OS: (e.g. from /etc/os-release): MacOS 10.14
VM Driver: (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): virtualbox
ISO version: (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): v0.30.0
Install tools: n/a
Others: n/a

A very interesting observation:

If you run minikube with the default CPU setting (2 CPUs for me), total idle consumption is ~30%; if you raise it to 6 CPUs, average consumption is ~70%; and with 1 CPU it is ~25% when idle. Fewer cores, less consumption. Paradox :)

All of these checks were done with no pods running.
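The steps, for anyone who wants to reproduce (the percentages above are from my machine):

    # Idle host CPU scales with the number of vCPUs assigned to the VM.
    minikube delete && minikube start --cpus 1   # ~25% idle
    minikube delete && minikube start --cpus 2   # ~30% idle (the default here)
    minikube delete && minikube start --cpus 6   # ~70% idle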

@corneliusweig
Contributor

I set up Kubernetes inside an LXC container. That way I have some isolation and no cost for the virtualization. It's not as easy to set up as minikube, but if somebody wants to try it, I have published my notes here: https://github.com/corneliusweig/kubernetes-lxd.

@Laski

Laski commented Dec 7, 2018

Same here on Linux, with spikes that reach 90% usage. Tried with vbox and with kvm2.

➜  ~ minikube version
minikube version: v0.30.0
➜  ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
➜  ~ 

@pietervogelaar

I also experience high CPU usage on my MacBook Pro (macOS Mojave 10.14.2). It also causes high CPU temperature, which makes my fans run at full speed, with the annoying noise that entails.

Minikube v0.30.0
VirtualBox 5.2.22

@douglascamata

Around 30% constant CPU usage here on my MacBook (i7 2.7 GHz) after a clean setup.

Minikube: 0.32.0
Virtualbox: 6.0.0
K8s version: v1.12.4 (default on this minikube version)

@neerolyte

neerolyte commented Jan 16, 2019

I'm seeing around 30-50% CPU from the relevant VBoxHeadless process on the parent host, and only 1-5% CPU visible in top within the minikube VM (minikube ssh). Is it possible to upgrade the vbox guest tools after deploying the VM, to confirm it's not just an issue with mismatched versions there?

Note: I ask this as a question because I can't see any familiar Linux package managers inside the VM, and it's missing the basic tools (bzip2, tar) needed to even start the installation.

@dperetti

Still not fixed? I thought Kubernetes was mainstream 😏

@tstromberg tstromberg changed the title 50% resting CPU usage hyperkit: 50% resting CPU usage Jan 23, 2019
@tstromberg tstromberg added this to the v1.0.0-candidate milestone Jan 23, 2019
@tstromberg tstromberg added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 23, 2019
@tstromberg tstromberg changed the title hyperkit: 50% resting CPU usage hyperkit: VM has 50% resting CPU usage when idle Jan 23, 2019
@tstromberg tstromberg added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jan 23, 2019
@tstromberg tstromberg modified the milestones: v1.0.0-candidate, v0.34.0 Jan 24, 2019
@aberres

aberres commented Jan 25, 2019

Same here with minikube 0.33.1 and vbox 6.0 running on Mac OS 10.14.

Kind of a showstopper for pushing my colleagues' workflows in the direction of local Kubernetes...

@tstromberg tstromberg removed this from the v0.34.0 milestone Feb 1, 2019
@ashconnell

@wojciechka this is interesting, what was your idle cpu usage before and after this change and the specs of your machine?

@dwilkins

Just tried this on Fedora 31, though I haven't done a scientific test. I think I can see the surges every 5 seconds without the @wojciechka tweak, and see them go away (or diminish) after making it. I still think minikube uses more CPU than microk8s, but minikube (unlike microk8s) makes it easy to run multiple clusters that can be started and stopped with their state saved via profiles.

@jroper

jroper commented Nov 25, 2019

I actually found my problem: it was swapping. The default of 2GB memory, with no minikube options tuned, was not enough for anything to happen without swapping, even creating a namespace.
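A minimal sketch of the fix, recreating the cluster with more memory (the --memory value is in MB):

    # Recreate the VM with enough memory that the control plane does not swap.
    minikube delete
    minikube start --memory 4096   # the default is 2048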

@mbaeuerle

@wojciechka thanks for sharing this valuable workaround with us!
Is there a way to configure minikube to always start up the cluster with this configuration? That would be awesome!

@wojciechka

Here is some additional information on this.

Regarding where I am running this: the host machine runs Debian 9 Stretch, with an i3-2100 CPU (2 cores, 2 threads at 3.1 GHz) and 32GB of RAM. It runs some background tasks, but is not under anything close to a heavy load; the load is below 0.5 when minikube is not started.

Minikube has 4 CPUs and 16GB of RAM assigned to it, and I do not run any workloads in minikube during these tests.

When the TEST_ADDON_CHECK_INTERVAL_SEC is set to 60, on the host I am seeing around 40% of a single core/thread (%CPU reported by top) being used by the VirtualBox process, when the cluster is not really doing anything. The load average reported from inside the VM is around 0.3 - which is definitely ok.

Output from top via minikube ssh session:

  PID USER      PR  NI    VIRT    RES  %CPU  %MEM     TIME+ S COMMAND
 2995 root      20   0 1962.9m 103.8m   7.2   0.6  33:08.40 S /var/lib/minikube/binaries/v1.14.3/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/b+
 3420 root      20   0  465.7m 324.8m   4.6   2.0  22:55.80 S kube-apiserver --advertise-address=192.168.99.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minik+
 2464 root      20   0 1507.4m  85.6m   3.2   0.5  19:49.17 S /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /e+
 3435 root      20   0   10.1g  65.4m   2.8   0.4  12:03.58 S etcd --advertise-client-urls=https://192.168.99.100:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --d+
 3460 root      20   0  212.5m 101.8m   2.2   0.6   8:29.18 S kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/c+
 6835 root      20   0  139.5m  32.1m   0.6   0.2   1:41.11 S /coredns -conf /etc/coredns/Corefile
 5019 root      20   0  136.2m  31.7m   0.4   0.2   0:34.85 S /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=minikube
 6575 root      20   0  139.5m  32.4m   0.4   0.2   1:42.63 S /coredns -conf /etc/coredns/Corefile
14213 docker    20   0   23.5m   2.6m   0.4   0.0   0:00.70 S sshd: docker@pts/0
   10 root      20   0    0.0m   0.0m   0.2   0.0   1:10.12 R [rcu_sched]
 1373 root      20   0   87.1m  32.0m   0.2   0.2   0:06.00 S /usr/lib/systemd/systemd-journald
 2473 root      20   0 2623.0m  44.0m   0.2   0.3   2:46.85 S containerd --config /var/run/docker/containerd/containerd.toml --log-level info
 3453 root      20   0  138.7m  37.0m   0.2   0.2   0:52.14 S kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
 6560 root      20   0  435.1m  52.1m   0.2   0.3   0:12.63 S /storage-provisioner
12184 root      20   0    0.0m   0.0m   0.2   0.0   0:00.29 I [kworker/3:1-events]

When I change the setting to the current minikube default of 5 seconds for TEST_ADDON_CHECK_INTERVAL_SEC, average CPU usage of the VM goes up to around 70-80% of a single core/thread (%CPU reported by top) used by the VirtualBox process. The load average inside the VM is around 0.4-0.6, so it still seems ok, but it is definitely higher.

Here's output from top in minikube ssh with the interval set to 5 seconds:

32304 root      20   0  138.7m  64.3m   6.2   0.4   0:00.31 S /usr/local/bin/kubectl apply -f /etc/kubernetes/addons -l kubernetes.io/cluster-service!=true,addonmanager.kubernetes.io/mode=Reconcile +
 3420 root      20   0  465.7m 324.8m   5.6   2.0  23:38.33 S kube-apiserver --advertise-address=192.168.99.100 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minik+
 2995 root      20   0 1962.9m 103.8m   5.4   0.6  33:56.49 S /var/lib/minikube/binaries/v1.14.3/kubelet --allow-privileged=true --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/b+
 3435 root      20   0   10.1g  66.6m   3.2   0.4  12:27.43 S etcd --advertise-client-urls=https://192.168.99.100:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --d+
 2464 root      20   0 1507.4m  83.5m   2.6   0.5  20:14.77 S /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /e+
 3460 root      20   0  212.5m 101.8m   2.4   0.6   8:45.72 S kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/c+
 2473 root      20   0 2623.0m  43.9m   0.8   0.3   2:49.71 S containerd --config /var/run/docker/containerd/containerd.toml --log-level info
 6575 root      20   0  139.5m  32.6m   0.6   0.2   1:45.68 S /coredns -conf /etc/coredns/Corefile
 3355 root      20   0  106.3m   6.7m   0.4   0.0   0:46.19 S containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/af42bf1f14cacabcd6e58392e+
 6835 root      20   0  139.5m  32.1m   0.4   0.2   1:44.27 S /coredns -conf /etc/coredns/Corefile
    1 root      20   0   40.9m   8.4m   0.2   0.1   3:58.69 S /sbin/init noembed norestore
    9 root      20   0    0.0m   0.0m   0.2   0.0   0:05.60 S [ksoftirqd/0]
   10 root      20   0    0.0m   0.0m   0.2   0.0   1:11.63 I [rcu_sched]
 5475 root      20   0  136.2m  46.9m   0.2   0.3   0:55.57 S /go/bin/all-in-one-linux --log-level debug
14213 docker    20   0   23.5m   2.6m   0.2   0.0   0:01.19 S sshd: docker@pts/0
30307 root      20   0   17.6m   2.8m   0.2   0.0   0:00.23 S bash /opt/kube-addons.sh
30616 root      20   0    0.0m   0.0m   0.2   0.0   0:00.02 I [kworker/0:0-events]

While top does not report significant usage by any single process, overall usage is clearly higher.

I also see kubectl apply being invoked very often when running with the current default 5-second interval.

I have also tried setting TEST_ADDON_CHECK_INTERVAL_SEC to 30, and from what I have measured, this also drops CPU usage to around 40-50% of a single core/thread (%CPU reported by top) as seen by the host.
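For reference, a sketch of how one might change the interval inside the VM. Only the variable name and /opt/kube-addons.sh (visible in the top output above) come from this thread; where the value is assigned is a guess, so locate it first:

    # Find where the interval is defined, then raise it from 5 to 60 seconds.
    minikube ssh
    sudo grep -rn TEST_ADDON_CHECK_INTERVAL_SEC /opt /etc/kubernetes
    sudo sed -i 's/TEST_ADDON_CHECK_INTERVAL_SEC=5/TEST_ADDON_CHECK_INTERVAL_SEC=60/' /opt/kube-addons.sh   # hypothetical location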

The interval of 5 seconds was added by the following commit:

Update addon-manager to v9.0.2, disable master negotiation and increase reconcile frequency

Perhaps it would be a good idea to revert the change to 5 seconds, or at least use a somewhat longer interval such as 30 seconds?

This was added 2 months ago, so I suspect the problem I am reporting does not overlap with the original reports from October 2018.

I am not sure whether this should be tracked under the same issue or a separate one.

@antonovicha

Did what @wojciechka proposed, together with a memory increase to 4 GB from the default 2 GB (as suggested by @jroper). It drops CPU usage on my laptop (2c/4t) from 70-80% to 40-50%, but it does not solve the problem: the CPU is still boosted above idle clocks and the fan runs at high revs.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 27, 2020
@antonovicha

Since the issue is marked as stale: my solution was to switch to microk8s in VBox. Works like a charm, no abnormal CPU consumption.

@rhyek

rhyek commented Feb 27, 2020

I went with KinD (Kubernetes in Docker). Highly recommended, as it works well on Linux and macOS (I haven't tested Windows, although it is supported). Bonus: you can use it for CI/CD as well.

@mitchej123

Similar story here. I went with microk8s or docker-for-mac; they don't have the high resting CPU usage.

@sharifelgamal
Collaborator

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 27, 2020
@wstrange

FWIW, recent versions of minikube are much better. Idle CPU on my Mac with hyperkit is about 1/3 of a core (33%).
I think the major culprit was the addon manager, which was removed in 1.7.x.

@sanarena

33% is still bad. I am seeing a constant 30% with the latest version of minikube on VMware; it was 50% with VirtualBox.

@mc0

mc0 commented Apr 23, 2020

It appears most of the CPU usage is kube-apiserver responding to lease renewals, etcd being queried by kube-apiserver, and kubelet running its routine tasks. I attempted to adjust some config and had some small success.

Note: Kubelet doesn't allow adjusting its housekeeping period (which I suspect is an intensive task). See kubernetes/kubernetes#89936

What I Changed

I'm posting this to share my experience, but I ultimately gave up on using minikube: the CPU usage from Kubernetes on a macOS laptop is just too high to roll out to a team (even setting aside how wasteful xhyve is). Your mileage may vary, and any improvements will disappear after restarting minikube.

These changes could potentially be applied with ~/.minikube/files/ (see the adding-files note), but I didn't attempt this. Similarly, they could be set in kubeconfig.go or with custom Kubernetes configuration (e.g. --extra-config 'controller-manager.leader-elect=false', which probably requires v1.9.2), but I didn't try these approaches either.
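For example, the --extra-config route would look something like this (untested, as noted; the flag names match the manifest changes listed below):

    # Pass component flags at cluster creation instead of editing manifests in the VM.
    minikube start \
      --extra-config=controller-manager.leader-elect=false \
      --extra-config=scheduler.leader-elect=false \
      --extra-config=apiserver.watch-cache=false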

Relevant Files

/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
/var/lib/kubelet/config.yaml

Controller Changes

Disabling Leader Elections (~5% CPU)

Minikube runs as a single instance, so it should never have multiple controller-managers or schedulers, yet these leader elections cause a significant amount of CPU usage in the kube-apiserver. Alternatively, the lease/renew/retry durations could be raised (e.g. 60s, 55s, 15s) if one wants elections to remain enabled.
Add this to kube-controller-manager.yaml and kube-scheduler.yaml:

    - --leader-elect=false
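To apply this in a running VM (a sketch; kubelet recreates static pods automatically when their manifests change):

    minikube ssh
    sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml   # add the flag under command:
    sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml            # add the same flag here
    # kubelet notices the manifest changes and recreates both static pods.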

Disabling Watch Cache (~5% CPU)

We are likely not running a significant number of watches, and I suspect the watch cache is an expensive operation.
Add this to the kube-apiserver.yaml:

    - --watch-cache=false

Small kubelet Tweak (~2% CPU)

Kubelet has a few operations that it needs to do routinely to stay healthy as a node.
These changes reduce those frequencies (and make the controller-manager aware of the new timing).

Add to kube-controller-manager.yaml:
- --node-monitor-period=30s
- --node-monitor-grace-period=180s

Adjust these in the kubelet config.yaml and then restart kubelet with systemctl daemon-reload && systemctl restart kubelet:

httpCheckFrequency: 30s
nodeStatusUpdateFrequency: 30s
syncFrequency: 60s

Small Controller Manager Syncs Tweak (~2% CPU)

Add to kube-controller-manager.yaml:

    - --horizontal-pod-autoscaler-sync-period=60s
    - --pvclaimbinder-sync-period=60s

@priyawadhwa

Hey everyone! We've made some overhead improvements in the past few months:

  1. Removed the addon manager and rewrote it (Remove polling from addon manager #6046)
  2. Set leader-elect=false by default (Set leader-elect=false for scheduler and controller manager #8431)
  3. Scaled down coredns to 1 pod (Reduce coredns replicas from 2 to 1 #8552)

On my machine, with the hyperkit driver, resting CPU usage of an idle cluster has dropped by 52%, to an average of 18% of a core.

I'm going to close this issue since I think it's been addressed, but if anyone is still seeing high CPU usage with the latest minikube version, please comment and reopen this issue by including /reopen in your comment. Thank you!

@r3econ

r3econ commented Aug 25, 2020

Is upgrading to minikube version v1.12.3 enough to fix this?
I upgraded, restarted, and performance-wise I see no difference whatsoever.

@sanarena

Same here. How do we make use of this fix?

@priyawadhwa

@r3econ @sanarena yah, upgrading should be enough.

If you're still seeing high CPU usage, could you open a new issue for it? We can track it there.
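One caveat worth checking if upgrading the binary alone changes nothing (an assumption on my part, since these fixes apply when the cluster is bootstrapped): an existing VM keeps its old configuration until it is recreated.

    # Recreate the cluster so the new defaults take effect.
    minikube delete
    minikube start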

@sanarena

sanarena commented Aug 28, 2020

@priyawadhwa Thanks. I updated but am still having the performance issue. I opened a new issue, #9104, which got closed right after; it seems like there is no solution anymore.

@paneq

paneq commented Sep 15, 2020

I went with microk8s or docker-for-mac, they don't have the high resting CPU usage.

@mitchej123 I tested docker-for-mac, minikube, microk8s, and k3d, and they all have the same issue. Some of them run as root, so you need sudo [h]top instead of plain [h]top to see the effect, but they all suffer from the same problem.

@bodhi-one

Was the issue actually resolved? Reports from various GitHub projects, including Docker for Mac and Kubernetes itself, point to high-frequency polling of etcd as a root cause. The adjustments mentioned by @priyawadhwa when closing this issue address what could be done within minikube, but the core k8s issues seem to remain and show up in various configurations/deployments:

kubernetes/kubernetes#75565
docker/for-mac#3065
kubernetes/kubernetes#100415

@mc0

mc0 commented Mar 23, 2021

@talj-us-ibm-com At this point, I think the work referenced by @priyawadhwa addresses most of the overhead that minikube can control (the other adjustments I brought up could leave things inconsistent for longer). As I mentioned previously, a big portion of the remaining overhead is kubernetes/kubernetes#75565. (I had opened kubernetes/kubernetes#89936 as a feature path towards addressing this in minikube.)

At the risk of being a bit ranty (and in reference to docker/for-mac#3065): all of that is on top of Docker for Mac having disk performance issues (the recent gRPC FUSE changes help some), virtualization CPU performance issues, and a whole set of hoops Docker has to jump through to make docker run on a Mac at all.
