numpy runs out of memory #561
Comments
Thank you very much for reporting this. The code fails when SMAC tries to compute the acquisition function for 10000 configurations. A practical solution would be to reduce this number to something like 1000, by passing the corresponding option (a sketch of one possibility follows).
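The option name `acq_opt_challengers` below is an assumption, inferred from the facade snippet quoted later in this thread; the exact suggestion from the original comment was not preserved:

```python
from smac.scenario.scenario import Scenario

# Build the scenario as usual, then lower the number of configurations on
# which the acquisition function is evaluated (attribute name is assumed).
scenario = Scenario({
    "run_obj": "quality",
    "runcount-limit": 100,
    "cs": cs,  # an existing ConfigurationSpace
})
scenario.acq_opt_challengers = 1000
```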
I'll try that, thanks!
We changed the code to
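```python
# Hypothetical reconstruction; the exact line from this comment was not
# preserved in this copy of the thread:
scenario.acq_opt_challengers = 1000
```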
but still get the same error messages. Can it be that the setting is not picked up by SMAC? If we don't set it explicitly, shouldn't the default be 5000 instead of 10000?
In smac_hpo_facade.py I found the following code snippet:
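(The snippet itself did not survive in this copy of the thread; based on the surrounding discussion, it is assumed to be an unconditional assignment along these lines.)

```python
# In SMAC4HPO.__init__ in smac_hpo_facade.py (0.11.x); assumed form:
scenario.acq_opt_challengers = 10000
```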
I think the value should only be overridden if it hasn't been set by the user.
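A minimal sketch of that suggestion, assuming the attribute is None unless the user sets it:

```python
# Only apply the facade default if the user did not set the option.
if getattr(scenario, "acq_opt_challengers", None) is None:
    scenario.acq_opt_challengers = 10000
```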
Hi Jendrik,

I would say that users should not change options such as this one.

Best,
Could it be that the problem is that no tested configuration is better than the initial incumbent?
I don't think so. But I would need either a toy example to reproduce the problem on my machine, or at the very least a debug output, so that I have a chance to guess the problem.
I have reduced the run to a toy example (test-numpy.py). When I use `ulimit -Sv 600000` and then `rm -rf smac && ./test-numpy.py`, I get "MemoryError: Unable to allocate array with shape (10000, 10, 9) and data type bool" after 4 seconds. Here is the script:
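(The script itself was not preserved in this copy of the thread; below is a minimal sketch of what it may have looked like with the 0.11-era API. The hyperparameter, its range, and the objective function are invented for illustration.)

```python
#!/usr/bin/env python3

from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import UniformIntegerHyperparameter
from smac.facade.smac_hpo_facade import SMAC4HPO
from smac.scenario.scenario import Scenario


def evaluate(config):
    # Toy objective: any cheap deterministic function will do.
    return (config["x"] - 5) ** 2


# A deliberately tiny configuration space, matching the later discussion
# about SMAC exhausting all possible configurations.
cs = ConfigurationSpace()
cs.add_hyperparameter(UniformIntegerHyperparameter("x", 0, 9))

scenario = Scenario({
    "run_obj": "quality",
    "runcount-limit": 100,
    "cs": cs,
    "output_dir": "smac",
    "deterministic": "true",
})

SMAC4HPO(scenario=scenario, tae_runner=evaluate).optimize()
```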
Hi,

Thank you for the example. Some comments:

1. It would be better not to limit the virtual memory.
2. This should only occur for small configuration spaces in which SMAC has already tried all possible configurations.
3. On my machine, the example needs roughly 160 MB; I assume that is fine?

Best,
Thanks for your comments!

Reg. 1: I agree that it would be better not to limit the virtual memory, but we have to make sure that the SMAC runs don't use too much memory when we run them in parallel on shared compute nodes on our cluster. Do you know an alternative way of limiting memory in this setting?

Reg. 2: Even if this only occurs for small configuration spaces, I think it would be good if SMAC stopped once it has tried all configurations. This would make debugging much easier.

Reg. 3: Yes, 160 MB is definitely fine.
I completely agree regarding 2, but this is not trivial to implement for complex configuration spaces with conditionals and forbidden constraints: essentially, it amounts to counting all solutions of a constraint satisfaction problem. For simple configuration spaces (without forbidden constraints), this should be feasible, and we will consider implementing a solution for such configuration spaces in a future release. Regarding 1, you could try to limit the memory in a different way (one possibility is sketched below).
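(The specific suggestion was not preserved in this copy of the thread; the following is just one generic way to cap memory from inside the Python process, using only the standard-library resource module.)

```python
import resource

# Cap this process's address space, mirroring the effect of
# `ulimit -Sv 600000` without needing a shell-level limit.
limit_bytes = 600_000 * 1024
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
```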
Thanks! I'll try that.
I just found out that […]. BTW, setting […].
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Description
SMAC runs out of memory for some of our scenarios (it has 3.5 GiB available). It catches this and aborts gracefully, but it would be great if there were some way of reducing the amount of memory that SMAC tries to reserve via numpy.
Here is the error traceback: […]
Steps/Code to Reproduce
I don't have a minimal example to reproduce the error, but here are our logs and SMAC output files: https://ai.dmi.unibas.ch/_tmp_files/seipp/smac-numpy-out-of-memory.tar.gz
You can find the stdout and stderr output in run.log and run.err. The smac files are under smac/run_*.
Do you have any suggestions for how to reduce the memory usage?
Versions
0.11.1