Memory option is not used #21
Comments
The memory setting was removed intentionally, to avoid wasting SUs by requesting more memory than the CPUs can support. Adding a lookup table for the per-cpu memory of different queues sounds like a good idea.
There are legitimate use cases for people wanting more memory than the nominal GB/CPU figure. Adele reported long wait times when requesting a full Broadwell node for a notebook, but the full node is only really required for its large memory. So I suggested requesting fewer CPUs but most of the memory to see if this helps with queue time, as she was not actually utilising all the CPUs in her dask cluster. I would favour putting the memory option back, but issuing a warning about the SU usage implications.
Sure, fine with me to add to the existing warnings on CPU count.
The script could even report the SUs the job will consume if it runs for the walltime requested.
Yeah, that would be nice. I've put it off because that's annoying to do in bash.
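For what it's worth, a minimal bash sketch of such an SU report might look something like this. The charge rates below are illustrative placeholders (check the NCI queue documentation for the current values), and the argument handling is assumed, not taken from gadi_jupyter:

```bash
#!/bin/bash
# Sketch only: estimate the SUs a job would consume if it runs for the
# full requested walltime. The per-queue charge rates here are
# illustrative placeholders, not authoritative values.

queue="$1"      # e.g. normalbw
ncpus="$2"      # number of cpus requested
walltime="$3"   # requested walltime as HH:MM:SS

# Illustrative SU charge rates per cpu-hour (placeholders)
case "$queue" in
    normal)   rate=2    ;;
    normalbw) rate=1.25 ;;
    normalsl) rate=1.5  ;;
    *) echo "Unknown queue '$queue'" >&2; exit 1 ;;
esac

# Convert HH:MM:SS to hours; bc supplies the floating-point
# arithmetic that bash itself lacks.
IFS=: read -r hh mm ss <<< "$walltime"
sus=$(bc -l <<< "$rate * $ncpus * ($hh + $mm/60 + $ss/3600)")

printf 'This job will use up to %.1f SUs if it runs for the full walltime\n' "$sus"
```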
The help text says memory is a user-settable command-line option:
https://github.com/coecms/nci_scripts/blob/master/gadi_jupyter#L20
but this is not implemented:
https://github.com/coecms/nci_scripts/blob/master/gadi_jupyter#L118
Also, trying to use this option causes the script to emit the usage message without reporting the erroneous option.
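A generic getopts pattern that names the offending option rather than silently printing the usage message might look like the following. This is a sketch with hypothetical option letters, not the parsing code actually in gadi_jupyter:

```bash
#!/bin/bash
# Sketch of getopts error reporting, with hypothetical option letters.
# A leading ':' in the optstring puts getopts into silent mode, so the
# script can print its own message naming the offending option.
usage() { echo "Usage: $0 [-q queue] [-n ncpus] [-m memory]" >&2; exit 2; }

while getopts ':q:n:m:' opt; do
    case "$opt" in
        q) queue="$OPTARG" ;;
        n) ncpus="$OPTARG" ;;
        m) memory="$OPTARG" ;;
        \?) echo "$0: unknown option -$OPTARG" >&2; usage ;;
        :)  echo "$0: option -$OPTARG requires an argument" >&2; usage ;;
    esac
done
```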
With the normalbw and normalsl queues now up and running, the default memory request could be queue dependent, with 9.1GB/cpu for Broadwell and 6GB/cpu for Skylake.
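A queue-dependent default along the lines of the lookup table suggested above could be done with a bash associative array. This is a sketch only: the variable names are assumed, and the per-cpu figures are the ones quoted in this thread:

```bash
#!/bin/bash
# Sketch: choose a default memory request from a per-queue lookup table.
# Associative arrays need bash 4+. Figures are the GB/cpu values quoted
# above; variable names are assumed for illustration.
declare -A mem_per_cpu=(
    [normalbw]=9.1   # Broadwell
    [normalsl]=6     # Skylake
)

queue="${QUEUE:-normalbw}"
ncpus="${NCPUS:-1}"

if [[ -z "${mem_per_cpu[$queue]}" ]]; then
    echo "No default memory known for queue '$queue'" >&2
    exit 1
fi

# Default request: per-cpu figure times the number of cpus requested.
mem=$(bc -l <<< "${mem_per_cpu[$queue]} * $ncpus")
printf 'Defaulting to %sGB memory on queue %s\n' "$mem" "$queue"
```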