PSE to provide analysis of resources needed by each SBG Task #68
Description: To scale the SBG end-to-end workflow, SPS needs to know what resources (memory and CPUs) must be allocated for executing each task. Upper limits are needed.

Comments
I can't give an upper limit (because the more the better), but the test runs I've done are usually limited by the ISOFIT runs. For ISOFIT, we used a memory-optimized instance with 4 vCPUs / 32 GB RAM, which worked pretty well for all the other tasks too, so I'd start with that as the upper bound for all of them.
For what it's worth, the actual SISTER workflow used c5.9xlarge (36 vCPUs, 72 GB RAM) instances for ISOFIT, so maybe after enough memory it's CPU constrained (there are certainly multi-threading capabilities in the ISOFIT code).
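For reference, a minimal sketch of how these per-task upper bounds could be recorded so a scheduler like SPS can allocate resources. The table shape, task keys, and fallback behavior are illustrative assumptions, not anything specified in this thread; only the two instance sizes come from the comments above.

```python
# Hypothetical per-task resource table for SPS scheduling (sketch only).
# Numbers come from the instances discussed above: a 4 vCPU / 32 GB
# memory-optimized node as a general upper bound, and c5.9xlarge
# (36 vCPU / 72 GB) for ISOFIT, which looks CPU-bound once memory suffices.
SBG_TASK_RESOURCES: dict[str, tuple[int, int]] = {
    # task name: (max vCPUs, max RAM in GB) -- task names are assumptions
    "isofit": (36, 72),   # CPU-heavy; ISOFIT has multi-threading support
    "default": (4, 32),   # starting upper bound for all other tasks
}

def resources_for(task: str) -> tuple[int, int]:
    """Return (vcpus, ram_gb) to request for a task, falling back to the default."""
    return SBG_TASK_RESOURCES.get(task, SBG_TASK_RESOURCES["default"])
```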
Lastly (in my chain of thought), this is where we actually need the SPS to provide telemetry for each run, so that a user can tune the system based on actual executions in the system (and eventually get an 'estimate' of the costs).
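A rough sketch of the tuning loop that comment describes, assuming SPS can emit per-run peak CPU and memory telemetry. The `RunTelemetry` record format and the 20% headroom factor are assumptions for illustration, not part of any existing SPS API:

```python
from dataclasses import dataclass

@dataclass
class RunTelemetry:
    """Hypothetical per-run telemetry record emitted by SPS."""
    task: str
    peak_ram_gb: float
    peak_vcpus: float
    cost_usd: float

def recommend_limits(runs: list[RunTelemetry], headroom: float = 1.2) -> dict:
    """Derive per-task upper bounds (and a cost estimate) from observed runs.

    headroom adds a safety margin over the worst observed usage; 1.2 (20%)
    is an assumed starting point, not a value from this thread.
    """
    by_task: dict[str, dict] = {}
    for r in runs:
        rec = by_task.setdefault(r.task, {"ram": 0.0, "cpu": 0.0, "costs": []})
        rec["ram"] = max(rec["ram"], r.peak_ram_gb)
        rec["cpu"] = max(rec["cpu"], r.peak_vcpus)
        rec["costs"].append(r.cost_usd)
    return {
        task: {
            "ram_gb": round(rec["ram"] * headroom, 1),
            "vcpus": round(rec["cpu"] * headroom, 1),
            "avg_cost_usd": sum(rec["costs"]) / len(rec["costs"]),
        }
        for task, rec in by_task.items()
    }
```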
That is a big machine.... |
Agreed, we'll only move up to a large machine if we want to try and run an operational scenario to benchmark against. |
Closing ticket since Mike provided the best possible estimate at this point. We still need to demonstrate scaling. |