
Choose smarter defaults for --memory #5021

Closed
tstromberg opened this issue Aug 8, 2019 · 6 comments · Fixed by #6900
Assignees
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

tstromberg (Contributor) commented Aug 8, 2019

While working with @priyawadhwa on a demo, I noticed that minikube was locking up. Increasing the memory size fixed the problem. Also, the first thing our documentation states after installation is to increase the memory size.

What if we improved the first-start experience for most users by dynamically selecting a more appropriate default for --memory?

My recommendation: change the default 2GB setting to 37.5% of available memory, but never less than 2.25GB or more than 8GB. For instance:

  • 4GB host -> 2.25GB VM
  • 6GB host -> 2.25GB VM
  • 8GB host -> 3GB VM
  • 16GB host -> 6GB VM
  • 32GB host -> 8GB VM
  • 64GB host -> 8GB VM

The balance can be fine-tuned, but my main argument is that a developer on a 16GB machine can probably live with a 6GB VM.
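The proposed rule (37.5% of host memory, clamped to a 2.25GB floor and 8GB ceiling) can be sketched in a few lines of Go. This is an illustration of the proposal only, not minikube's actual implementation; the function name is hypothetical.

```go
package main

import "fmt"

// suggestedMemoryMB sketches the proposed --memory default: 37.5% of
// host memory, but never below 2304MB (2.25GB) or above 8192MB (8GB).
func suggestedMemoryMB(hostMB int) int {
	m := hostMB * 375 / 1000 // 37.5%
	const floor = 2304       // 2.25GB in MB
	const ceil = 8192        // 8GB in MB
	if m < floor {
		return floor
	}
	if m > ceil {
		return ceil
	}
	return m
}

func main() {
	// Reproduces the table above (host sizes in MB).
	for _, host := range []int{4096, 6144, 8192, 16384, 32768, 65536} {
		fmt.Printf("%dGB host -> %dMB VM\n", host/1024, suggestedMemoryMB(host))
	}
}
```

Note that a 6GB host lands exactly on the 2.25GB floor (6144 × 0.375 = 2304MB), which is why the 4GB and 6GB rows give the same result.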

@tstromberg tstromberg added kind/feature Categorizes issue or PR as related to a new feature. triage/discuss Items for discussion labels Aug 8, 2019
@tstromberg tstromberg changed the title Improve the default --memory setting (dynamic sizing?) Choose smarter defaults for --memory Aug 8, 2019
afbjorklund (Collaborator) commented Aug 8, 2019

This should leave some room for multi-node as well (eventually)? Currently, starting 4 VMs means 8G. (When did the “2.25” happen?)

Some hypervisors can do “ballooning” and adjust memory on the fly (or, more accurately, give back what they took from the host OS):
https://www.virtualbox.org/manual/ch04.html#guestadd-balloon

Zyqsempai (Contributor) commented

@tstromberg Do you have any idea how we can get the current available memory size?

tstromberg (Contributor, Author) commented

@Zyqsempai - It's unclear if gopsutil, which we already have imported, gives us what we need. I worry that it may be conflating physical memory with swap:

https://github.com/shirou/gopsutil#usage

As an alternative, there is: https://github.com/pbnjay/memory
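On Linux, the figure both libraries ultimately need is exposed as `MemAvailable` in /proc/meminfo, which is distinct from the swap totals (swap lives on separate lines, and in gopsutil in a separate stat). A stdlib-only sketch of reading it (Linux-specific, illustrative; the function name is mine, not from either library):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// availableMemoryKB parses the MemAvailable field from /proc/meminfo.
// Linux-only sketch; a portable implementation would use a library
// such as gopsutil or pbnjay/memory instead.
func availableMemoryKB() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Lines look like: "MemAvailable:   12345678 kB"
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("MemAvailable not found in /proc/meminfo")
}

func main() {
	kb, err := availableMemoryKB()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("available: %d kB (~%d MB)\n", kb, kb/1024)
}
```

Because `MemAvailable` excludes swap, basing the default on it would avoid the conflation concern raised above.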

afbjorklund (Collaborator) commented

I thought swap was in a separate Stat? But I haven’t actually verified that.

@tstromberg tstromberg added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Aug 15, 2019
@tstromberg tstromberg removed the triage/discuss Items for discussion label Sep 30, 2019
priyawadhwa commented

Related to #94

@medyagh medyagh added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Mar 2, 2020
@tstromberg tstromberg self-assigned this Mar 5, 2020
afbjorklund (Collaborator) commented

Did we find out where the extra 0.25 came from? I.e., why does it require even more memory now?
