
add future::nbrOfWorkers to future.R #438

Closed. Wanted to merge 2 commits.

Conversation

kendonB (Contributor) commented Jun 29, 2018

Summary

future parallelism incorrectly uses the jobs argument to make. This PR fixes it so that it uses future::nbrOfWorkers.
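As a rough illustration of the intended behavior (a sketch only, not drake's actual scheduler code), `future::nbrOfWorkers()` reports the worker count implied by whatever future plan the user has set, which is the value the future backends should respect instead of `jobs`:

```r
# Sketch: the worker count should come from the active future plan.
library(future)

plan(multisession, workers = 2)  # example plan for illustration; any plan works
nbrOfWorkers()                   # 2, derived from the plan, not from make(jobs = ...)
```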

Related GitHub issues

  • Ref: # None.

Checklist

  • I have read drake's code of conduct, and I agree to follow its rules.
  • I have read the guidelines for contributing.
  • I have listed any substantial changes in the development news.
  • [NA] I have added testthat unit tests to tests/testthat to confirm that any new features or functionality work correctly.
  • [Travis did] I have tested this pull request locally with devtools::check().
  • This pull request is ready for review.
  • I think this pull request is ready to merge.

codecov-io commented Jun 29, 2018

Codecov Report

Merging #438 into master will not change coverage.
The diff coverage is 100%.

Impacted file tree graph

@@          Coverage Diff          @@
##           master   #438   +/-   ##
=====================================
  Coverage     100%   100%           
=====================================
  Files          66     66           
  Lines        5363   5363           
=====================================
  Hits         5363   5363
Impacted Files Coverage Δ
R/future.R 100% <100%> (ø) ⬆️

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 1015170...2fb1c6c.

wlandau (Member) commented Jun 29, 2018

Do future-based make()s really ignore config$jobs? Now that drake uses custom schedulers, it should no longer rely on future::nbrOfWorkers. In make(parallelism = "future"), drake sends out individual futures and manages them from the calling R session. In make(parallelism = "future_lapply"), drake calls future_lapply() to spin up a fixed number of persistent workers in advance, and the governing master process sends instructions to those workers. These changes are recent.
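For readers unfamiliar with the two styles described above, a generic future-based sketch (toy workloads; this is not drake's scheduler code) contrasts them:

```r
library(future)
library(future.apply)  # provides future_lapply()
plan(multisession, workers = 2)

# "future" style: send out individual futures and manage them
# from the calling R session.
futures <- lapply(1:4, function(i) future(i^2))
results <- vapply(futures, value, numeric(1))

# "future_lapply" style: a single call fans the work out to the
# fixed pool of workers implied by the plan.
results2 <- future_lapply(1:4, function(i) i^2)
```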

kendonB (Contributor, author) commented Jun 29, 2018

My understanding was that you wanted config$jobs to be used just for the light parallelization stages, and to defer to the future plan for the number of workers when using future_lapply. If so, don't you want to ignore config$jobs when using future-based makes?

When I tested both future and future_lapply with the most recent CRAN version, both incorrectly used jobs rather than nbrOfWorkers when submitting SLURM jobs.

I guess I'm a bit confused by the first part of your question.

wlandau (Member) commented Jun 30, 2018

> My understanding was that you wanted config$jobs to just be used for the light parallelization stages, and then defer to the future plan for the number of workers when using future_lapply. If so, you want to ignore config$jobs when using future-based makes?

I have been trying to move drake away from staged parallelism as much as possible because we lose parallel efficiency in situations similar to #168. (Can't totally remove it because of situations like #369 (comment).) As I explained in #437, the reliance on config$jobs is part of drake's new behavior for the future-based backends.

A lot has changed, and you might find this recent rOpenSci tech note to be a useful high-level summary.

@wlandau wlandau closed this Jun 30, 2018
@kendonB kendonB deleted the patch-7 branch July 4, 2018 20:44
3 participants