[DNM] Try two worker processes per host #915

Open
wants to merge 1 commit into main
1 change: 1 addition & 0 deletions cluster_kwargs.yaml
@@ -15,6 +15,7 @@ default:
package_sync: true
wait_for_workers: true
scheduler_vm_types: [m6i.large]
_n_worker_specs_per_host: 2
Member:
is this on prod now?

Member:
IIUC, a fair assessment would require us to double the machine sizes and halve the cluster sizes. Otherwise, the same-host workers have much less memory to work with and are much more likely to run into spilling and OOM.
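The adjustment suggested above could look roughly like this. This is a hypothetical sketch of `cluster_kwargs.yaml`, not the repo's actual values: the `n_workers` and `worker_vm_types` keys and the baseline numbers are assumptions, chosen so that per-worker resources stay constant (m6i.large is 2 vCPU / 8 GiB, m6i.xlarge is 4 vCPU / 16 GiB).

```yaml
# Baseline (assumed): 1 worker process per host
# n_workers: 10
# worker_vm_types: [m6i.large]   # 2 vCPU, 8 GiB per host

# Comparable A/B setup with 2 worker processes per host:
# double the machine size and halve the host count, so each
# worker process still gets roughly 2 vCPU and 8 GiB of memory.
n_workers: 5
worker_vm_types: [m6i.xlarge]    # 4 vCPU, 16 GiB per host
_n_worker_specs_per_host: 2
```

With this pairing, total cluster vCPU and memory match the baseline, so any remaining difference in the benchmarks should be attributable to process co-location (shared disk, network, and NUMA effects) rather than to each worker simply having less memory.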


Member:

Results of this A/B test are interesting but likely require a bit of further analysis.

[image: A/B test benchmark comparison chart]

We can see a couple of tests that improve significantly while others get much worse.

My best read, without digging deeper, is that most or almost all of our tests require a bit of spilling, and that every test case that spills behaves really poorly, likely because the disk is just busy?

backend_options:
spot: true
spot_on_demand_fallback: true