
Releases: ropensci/drake

max_expand and text_drake_graph()

19 May 15:24

Version 7.3.0

Bug fixes

  • Accommodate rlang's new interpolation operator {{, which was causing make() to fail when drake_plan() commands are enclosed in curly braces (#864).
  • Move "config$lock_envir <- FALSE" from loop_build() to backend_loop(). This makes sure config$envir is correctly locked in make(parallelism = "clustermq").
  • Convert factors to characters in the optional .data argument of map() and cross() in the DSL.
  • In the DSL of drake_plan(), repair cross(.data = !!args), where args is an optional data frame of grouping variables.
  • Handle trailing slashes in file_in()/file_out() directories for Windows (#855).
  • Make .id_chr work with combine() in the DSL (#867).
  • Do not try make_spinner() unless the version of cli is at least 1.1.0.

New features

  • Add functions text_drake_graph() (and r_text_drake_graph() and render_text_drake_graph()). Uses text art to print a dependency graph to the terminal window. Handy for when users SSH into remote machines without X Window support.
  • Add a new max_expand argument to drake_plan(), an optional upper bound on the lengths of grouping variables for map() and cross() in the DSL. Comes in handy when you have a massive number of targets and you want to test a miniature version of your workflow before you scale up to production. See the sketch below.
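
A minimal sketch of both features; analyze() and data.csv are hypothetical stand-ins for your own function and input file:

    library(drake)

    plan <- drake_plan(
      data = read.csv(file_in("data.csv")),
      result = target(
        analyze(data, size),
        transform = map(size = c(10, 100, 1000))
      ),
      max_expand = 2 # cap map()/cross() expansion while prototyping
    )

    config <- drake_config(plan)
    text_drake_graph(config) # print a text-art dependency graph to the terminal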

Enhancements

  • Delay the initialization of clustermq workers for as long as possible. Before launching them, build/check targets locally until we reach an outdated target with hpc equal to FALSE. In other words, if no targets actually require clustermq workers, no workers get created.
  • In make(parallelism = "future"), reset the config$sleep() backoff interval whenever a new target gets checked.
  • Add a "done" message to the console log file when the workflow has completed.
  • Replace CodeDepends with a base R solution in code_to_plan(). Fixes a CRAN note.
  • The DSL (transformations in drake_plan()) is no longer experimental.
  • The callr API (r_make() and friends) is no longer experimental.
  • Deprecate the wildcard/text-based functions for creating plans: evaluate_plan(), expand_plan(), map_plan(), gather_plan(), gather_by(), reduce_plan(), reduce_by().
  • Change some deprecated functions to defunct: deps(), max_useful_jobs(), and migrate_drake_project().

Improved visuals

19 Apr 02:43

Version 7.2.0

drake version 7.2.0 is being released early in order to ensure compatibility with development testthat, re #849.

Mildly breaking changes

  • In the DSL (e.g. drake_plan(x = target(..., transform = map(...)))), avoid inserting extra dots in target names when the grouping variables are character vectors (#847). Target names come out much nicer this way, but those name changes will invalidate some targets (i.e. they need to be rebuilt with make()).

Bug fixes

  • Use config$jobs_preprocess (local jobs) in several places where drake was incorrectly using config$jobs (meant for targets).
  • Allow loadd(x, deps = TRUE, config = your_config) to work even if x is not cached (#830). This required disabling tidyselect functionality when deps is TRUE. There is a new note in the help file about this, and an informative console message prints out on loadd(deps = TRUE, tidyselect = TRUE). The default value of tidyselect is now !deps.
  • Minor: avoid printing messages and warnings twice to the console (#829).
  • Ensure compatibility with testthat >= 2.0.1.9000.

New features

  • In drake_plan() transformations, allow the user to refer to a target's own name using a special .id_chr symbol, which is treated like a character string.
  • Add a transparency argument to drake_ggraph() and render_drake_ggraph() to disable transparency in the rendered graph. Useful for R installations without transparency support. See the sketch below.
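
A minimal sketch of both features; fit_model() is a placeholder for your own modeling function:

    library(drake)

    plan <- drake_plan(
      model = target(
        fit_model(data, label = .id_chr), # .id_chr stands for the target's own name as a character string
        transform = map(data = c(mtcars, iris))
      )
    )

    config <- drake_config(plan)
    drake_ggraph(config, transparency = FALSE) # opaque rendering for R builds without transparency support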

Enhancements

  • Use a custom layout to improve node positions and aspect ratios of vis_drake_graph() and drake_ggraph() displays. Only activated in vis_drake_graph() when there are at least 10 nodes distributed in both the vertical and horizontal directions.
  • Allow nodes to be dragged both vertically and horizontally in vis_drake_graph() and render_drake_graph().
  • Prevent dots from showing up in target names when you supply grouping variables to transforms in drake_plan() (#847).
  • Do not keep drake plans (drake_plan()) inside drake_config() objects. When other bottlenecks are removed, this will reduce the burden on memory (re #800).
  • Do not retain the targets argument inside drake_config() objects. This is to reduce memory consumption.
  • Deprecate the layout and direction arguments of vis_drake_graph() and render_drake_graph(). Direction is now always left to right and the layout is always Sugiyama.
  • Write the cache log file in CSV format (now drake_cache.csv by default) to avoid issues with spaces (e.g. entry names with spaces in them, such as "file report.Rmd").

Maintenance release

07 Apr 14:06

Version 7.1.0

Bug fixes

  • Fix a bug from drake 7.0.0: if you ran make() in interactive mode and responded to the menu prompt with an option other than 1 or 2, targets would still build.
  • Make sure file outputs show up in drake_graph(). The bug came from append_output_file_nodes(), a utility function of drake_graph_info().
  • Repair r_make(r_fn = callr::r_bg()) re #799.
  • Allow drake_ggraph() and sankey_drake_graph() to work when the graph has no edges.

New features

  • Add a new use_drake() function to write the make.R and _drake.R files from the main example. Does not write other supporting scripts.
  • With an optional logical hpc column in your drake_plan(), you can now select which targets to deploy to HPC and which to run locally. See the sketch after this list.
  • Add a list argument to build_times(), just like loadd().
  • Add a new RStudio addin: 'loadd target at cursor', which can be bound to a keyboard shortcut. It loads the target identified by the symbol at the cursor position into the global environment.
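
A minimal sketch of per-target HPC routing, assuming a configured clustermq backend; slow_simulation() and quick_summary() are placeholder functions:

    library(drake)

    plan <- drake_plan(
      sim = target(slow_simulation(), hpc = TRUE),    # send this target to the parallel workers
      note = target(quick_summary(sim), hpc = FALSE)  # build this one in the local master process
    )

    make(plan, parallelism = "clustermq", jobs = 2)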

Enhancements

  • file_in() and file_out() can now handle entire directories, e.g. file_in("your_folder_of_input_data_files") and file_out("directory_with_a_bunch_of_output_files"). See the sketch after this list.
  • Send less data from config to HPC workers.
  • Improve drake_ggraph()
    • Hide node labels by default and render the arrows behind the nodes.
    • Print an informative error message when the user supplies a drake plan to the config argument of a function.
    • By default, use gray arrows and a black-and-white background with no gridlines.
  • For the map() and cross() transformations in the DSL, prevent the accidental sorting of targets by name. Needed merge(sort = FALSE) in dsl_left_outer_join().
  • Simplify verbosity. The verbose argument of make() now takes values 0, 1, and 2, and maximum verbosity in the console prints targets, retries, failures, and a spinner. The console log file, on the other hand, dumps maximally verbose runtime info regardless of the verbose argument.
  • In previous versions, functions generated with f <- Rcpp::cppFunction(...) did not stay up to date from session to session because the addresses corresponding to anonymous pointers were showing up in deparse(f). Now, drake ignores those pointers, and Rcpp functions compiled inline appear to stay up to date. This problem was more of an edge case than a bug.
  • Prepend time stamps with sub-second times to the lines of the console log file.
  • In drake_plan(), deprecate the tidy_evaluation argument in favor of the new and more concise tidy_eval. To preserve back compatibility for now, if you supply a non-NULL value to tidy_evaluation, it overwrites tidy_eval.
  • Reduce the object size of drake_config() objects by assigning the closure of config$sleep to baseenv().
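
A minimal sketch of directory-level file tracking; the directory names and helper functions are hypothetical:

    plan <- drake_plan(
      summaries = summarize_files(file_in("raw_data_dir")),       # watch every file in the input directory
      report = write_reports(summaries, file_out("results_dir"))  # declare a whole directory of output files
    )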

drake transformed

10 Mar 20:58

Version 7.0.0

Breaking changes

  • The enhancements that increase cache access speed also invalidate targets in old projects. Workflows built with drake <= 6.2.1 will need to run from scratch again.
  • In drake plans, the command and trigger columns are now lists of language objects instead of character vectors. make() and friends still work if you have character columns, but the default output of drake_plan() has changed to this new format.
  • All parallel backends (parallelism argument of make()) except "clustermq" and "future" are removed. A new "loop" backend covers local serial execution.
  • A large amount of deprecated functionality is now defunct, including several functions (built(), find_project(), imported(), and parallel_stages(); full list here) and the single-quoted file API.
  • Set the default value of lock_envir to TRUE in make() and drake_config(). So make() will automatically quit in error if the act of building a target tries to change upstream dependencies.
  • make() no longer returns a value. Users will need to call drake_config() separately to get the old return value of make().
  • Require the jobs argument to be of length 1 (make() and drake_config()). To parallelize the imports and other preprocessing steps, use jobs_preprocess, also of length 1.
  • Get rid of the "kernels" storr namespace. As a result, drake is faster, but users will no longer be able to load imported functions using loadd() or readd().
  • In target(), users must now explicitly name all the arguments except command, e.g. target(f(x), trigger = trigger(condition = TRUE)) instead of target(f(x), trigger(condition = TRUE)). See the sketch after this list.
  • Fail right away in bind_plans() when the result has duplicated target names. This makes drake's API more predictable and helps users catch malformed workflows earlier.
  • loadd() only loads targets listed in the plan. It no longer loads imports or file hashes.
  • The return values of progress(), deps_code(), deps_target(), and predict_workers() are now data frames.
  • Change the default value of hover to FALSE in visualization functions. Improves speed.
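
A minimal sketch of the updated calling pattern; get_data() and f() are placeholder functions:

    library(drake)

    plan <- drake_plan(
      x = get_data(),
      y = target(
        f(x),
        trigger = trigger(condition = TRUE) # arguments after the command must now be named
      )
    )

    make(plan)                   # no longer returns a value; lock_envir is TRUE by default
    config <- drake_config(plan) # build the config object separately when you need it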

Bug fixes

  • Allow bind_plans() to work with lists of plans (bind_plans(list(plan1, plan2)) was returning NULL in drake 6.2.0 and 6.2.1).
  • Ensure that get_cache(path = "non/default/path", search = FALSE) looks for the cache in "non/default/path" instead of getwd().
  • Remove strict dependencies on package tibble.
  • Pass the correct data structure to ensure_loaded() in meta.R and triggers.R when ensuring the dependencies of the condition and change triggers are loaded.
  • Require a config argument to drake_build() and loadd(deps = TRUE).

New features

  • Introduce a new experimental domain-specific language for generating large plans (#233). Details here.
  • Implement a lock_envir argument to safeguard reproducibility. See this thread for a demonstration of the problem solved by make(lock_envir = TRUE). More discussion: #619, #620.
  • The new from_plan() function allows users to reference custom plan columns from within commands. Changes to values in these columns do not invalidate targets.
  • Add a menu prompt (#762) to safeguard against make() pitfalls in interactive mode (#761). Appears once per session. Disable with options(drake_make_menu = FALSE).
  • Add new API functions r_make(), r_outdated(), etc. to run drake functions more reproducibly in a clean session. See the help file of r_make() for details, and the sketch after this list.
  • progress() gains a progress argument for filtering results. For example, progress(progress = "failed") will report targets that failed.
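
A small sketch of the callr API and the new progress filter, assuming a _drake.R configuration script at the project root:

    library(drake)

    r_make()                      # run make() in a fresh callr session driven by _drake.R
    r_outdated()                  # check outdated targets the same way
    progress(progress = "failed") # data frame of targets whose latest build failed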

Enhancements

  • Large speed boost: move away from storr's key mangling in favor of drake's own encoding of file paths and namespaced functions for storr keys.
  • Exclude symbols ., .., and .gitignore from being target names (consequence of the above).
  • Use only one hash algorithm per drake cache, which the user can set with the hash_algorithm argument of new_cache(), storr::storr_rds(), and various other cache functions. Thus, the concepts of a "short hash algorithm" and "long hash algorithm" are deprecated, and the functions long_hash(), short_hash(), default_long_hash_algo(), default_short_hash_algo(), and available_hash_algos() are deprecated. Caches are still back-compatible with drake > 5.4.0 and <= 6.2.1.
  • Allow the magrittr dot symbol to appear in some commands sometimes.
  • Deprecate the fetch_cache argument in all functions.
  • Remove packages DBI and RSQLite from "Suggests".
  • Define a special config$eval <- new.env(parent = config$envir) for storing built targets and evaluating commands in the plan. Now, make() no longer modifies the user's environment. This move is a long-overdue step toward purity.
  • Remove dependency on the codetools package.
  • Deprecate and remove the session argument of make() and drake_config(). Details: #623 (comment).
  • Deprecate the graph and layout arguments to make() and drake_config(). The change simplifies the internals, and memoization allows us to do this.
  • Warn the user if running make() in a subdirectory of the drake project root (determined by the location of the .drake folder in relation to the working directory).
  • In the code analysis, explicitly prohibit targets from being dependencies of imported functions.
  • Increase options for the verbose argument, including the option to print execution and total build times.
  • Separate the building of targets from the processing of imports. Imports are processed with rudimentary staged parallelism (mclapply() or parLapply(), depending on the operating system).
  • Ignore the imports when it comes to build times. Functions build_times(), predict_runtime(), etc. focus on only the targets.
  • Deprecate many API functions, including plan_analyses(), plan_summaries(), analysis_wildcard(), cache_namespaces(), cache_path(), check_plan(), dataset_wildcard(), drake_meta(), drake_palette(), drake_tip(), recover_cache(), cleaned_namespaces(), target_namespaces(), read_drake_config(), read_drake_graph(), and read_drake_plan().
  • Deprecate target() as a user-side function. From now on, it should only be called from within drake_plan().
  • drake_envir() now throws an error, not a warning, if called in the incorrect context. Should be called only inside commands in the user's drake plan.
  • Replace *expr*() rlang functions with their *quo*() counterparts. We still keep rlang::expr() in the few places where we know the expressions need to be evaluated in config$eval.
  • The prework argument to make() and drake_config() can now be an expression (language object) or list of expressions. Character vectors are still acceptable.
  • At the end of make(), print messages about triggers etc. only if verbose >= 2L.
  • Deprecate and rename in_progress() to running().
  • Deprecate and rename knitr_deps() to deps_knitr().
  • Deprecate and rename dependency_profile() to deps_profile().
  • Deprecate and rename predict_load_balancing() to predict_workers().
  • Deprecate this_cache() and defer to get_cache() and storr::storr_rds() for simplicity.
  • Change the default value of hover to FALSE in visualization functions. Improves speed. Also a breaking change.
  • Deprecate drake_cache_log_file(). We recommend using make() with the cache_log_file argument to create the cache log. This way, the log is always up to date with make() results. See the sketch below.
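
A minimal sketch combining the single cache-wide hash algorithm with the recommended cache log workflow; the path, algorithm, and file name are just examples, and plan is assumed to exist:

    library(drake)

    cache <- new_cache(path = ".drake", hash_algorithm = "xxhash64")
    make(plan, cache = cache, cache_log_file = "drake_cache.csv")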

CRAN hotfix

10 Dec 19:25

Version 6.2.1 is a hotfix to address the failing automated CRAN checks for 6.2.0. Chiefly, in CRAN's Debian R-devel (2018-12-10) check platform, errors of the form "length > 1 in coercion to logical" occurred when either argument to && or || was not of length 1 (e.g. nzchar(letters) && length(letters)). In addition to fixing these errors, version 6.2.1 also removes a problematic link from the vignette.

For more information, please see the release notes of version 6.2.0.

Faster, leaner, and compatible with tibble 2.0.0

10 Dec 14:04

New features

  • Add a sep argument to gather_by(), reduce_by(), reduce_plan(), evaluate_plan(), expand_plan(), plan_analyses(), and plan_summaries(). Allows the user to set the delimiter for generating new target names.
  • Expose a hasty_build argument to make() and drake_config(). Here, the user can set the function that builds targets in "hasty mode" (make(parallelism = "hasty")).
  • Add a new drake_envir() function that returns the environment where drake builds targets. It can only be accessed from inside the commands in the workflow plan data frame. The primary use case is to allow users to remove individual targets from memory at predetermined build steps. See the sketch below.
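
A minimal sketch of the memory-management use case; large_data() and summarize_data() are placeholder functions:

    plan <- drake_plan(
      big = large_data(),
      summary = {
        out <- summarize_data(big)
        rm(big, envir = drake_envir()) # drop the upstream target from memory once it is no longer needed
        out
      }
    )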

Bug fixes

  • Ensure compatibility with tibble 2.0.0.
  • Stop returning 0s from predict_runtime(targets_only = TRUE) when some targets are outdated and others are not.
  • Remove sort(NULL) warnings from create_drake_layout(). (Affects R-3.3.x.)

Enhancements

  • Large speed boost: reduce repeated calls to parse() in code_dependencies().
  • Large speed boost: change the default value of memory_strategy (previously pruning_strategy) to "speed" (previously "lookahead").
  • Compute a special data structure in drake_config() (config$layout) just to store the code analysis results. This is an intermediate structure between the workflow plan data frame and the graph. It will help clean up the internals in future development.
  • Improve memoized preprocessing: deparse all the functions in the environment so the memoization does not react to spurious changes in R internals. Related: #345.
  • Use the label argument to future() inside make(parallelism = "future"). That way, job names are target names by default if job.name is used correctly in the batchtools template file.
  • Remove strict dependencies on packages dplyr, evaluate, fs, future, magrittr, parallel, R.utils, stats, stringi, tidyselect, and withr.
  • Remove package rprojroot from "Suggests".
  • Deprecate the force argument in all functions except make() and drake_config().
  • Change the name of prune_envir() to manage_memory().
  • Deprecate and rename the pruning_strategy argument to memory_strategy (make() and drake_config()). See the sketch after this list.
  • Print warnings and messages to the console_log_file in real time (#588).
  • Use HTML line breaks in vis_drake_graph() hover text to display commands in the drake plan more elegantly.
  • Speed up predict_load_balancing() and remove its reliance on internals that will go away in 2019 via #561.
  • Remove support for the worker column of config$plan in predict_runtime() and predict_load_balancing(). This functionality will go away in 2019 via #561.
  • Change the names of the return value of predict_load_balancing() to time and workers.
  • Bring the documentation of predict_runtime() and predict_load_balancing() up to date.
  • Deprecate drake_session() and rename to drake_get_session_info().
  • Deprecate the timeout argument in the API of make() and drake_config(). A value of timeout can be still passed to these functions without error, but only the elapsed and cpu arguments impose actual timeouts now.
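
A minimal sketch of the renamed argument alongside real-time console logging; the log file name is just an example, and plan is assumed to exist:

    library(drake)

    make(plan, memory_strategy = "speed", console_log_file = "drake.log")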

map_plan() and other niceties

26 Oct 12:22

Version 6.1.0

New features

  • Add a new map_plan() function to easily create a workflow plan data frame that executes a function call over a grid of arguments. See the sketch after this list.
  • Add a new plan_to_code() function to turn drake plans into generic R scripts. New users can use this function to better understand the relationship between plans and code, and unsatisfied customers can use it to disentangle their projects from drake altogether. Similarly, plan_to_notebook() generates an R notebook from a drake plan.
  • Add a new drake_debug() function to run a target's command in debug mode. Analogous to drake_build().
  • Add a mode argument to trigger() to control how the condition trigger factors into the decision to build or skip a target. See ?trigger for details.
  • Add a new sleep argument to make() and drake_config() to help the master process consume fewer resources during parallel processing.
  • Enable the caching argument for the "clustermq" and "clustermq_staged" parallel backends. Now, make(parallelism = "clustermq", caching = "master") will do all the caching with the master process, and make(parallelism = "clustermq", caching = "worker") will do all the caching with the workers. The same is true for parallelism = "clustermq_staged".
  • Add a new append argument to gather_plan(), gather_by(), reduce_plan(), and reduce_by(). The append argument controls whether the output includes the original plan in addition to the newly generated rows.
  • Add new functions load_main_example(), clean_main_example(), and clean_mtcars_example().
  • Add a filter argument to gather_by() and reduce_by() in order to restrict what we gather even when append is TRUE.
  • Add a hasty mode: make(parallelism = "hasty") skips all of drake's expensive caching and checking. All targets run every single time and you are responsible for saving results to custom output files, but almost all the by-target overhead is gone.
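
A minimal sketch of map_plan(); simulate(mean, sd) is a hypothetical function you would define yourself:

    library(drake)

    args <- expand.grid(mean = c(0, 1), sd = c(1, 2))
    args$id <- paste0("sim_", seq_len(nrow(args))) # becomes the target names
    plan <- map_plan(args, simulate)
    make(plan)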

Bug fixes

  • Ensure commands in the plan are re-analyzed for dependencies when new imports are added (#548). Was a bug in version 6.0.0 only.
  • Call path.expand() on the file argument to render_drake_graph() and render_sankey_drake_graph(). That way, tildes in file paths no longer interfere with the rendering of static image files. Compensates for https://github.com/wch/webshot.
  • Skip tests and examples if the required "Suggests" packages are not installed.
  • Stop checking for non-standard columns. Previously, warnings about non-standard columns were incorrectly triggered by evaluate_plan(trace = TRUE) followed by expand_plan(), gather_plan(), reduce_plan(), gather_by(), or reduce_by(). The more relaxed behavior also gives users more options about how to construct and maintain their workflow plan data frames.
  • Use checksums in "future" parallelism to make sure files travel over network file systems before proceeding to downstream targets.
  • Refactor and clean up checksum code.
  • Skip more tests and checks if visNetwork is not installed.

Enhancements

  • Stop earlier in make_targets() if all the targets are already up to date.
  • Improve the documentation of the seed argument in make() and drake_config().
  • Set the default caching argument of make() and drake_config() to "master" rather than "worker". The default option should be the lower-overhead option for small workflows. Users have the option to make a different set of tradeoffs for larger workflows.
  • Allow the condition trigger to evaluate to non-logical values as long as those values can be coerced to logicals.
  • Require that the condition trigger evaluate to a vector of length 1.
  • Keep non-standard columns in drake_plan_source().
  • make(verbose = 4) now prints to the console when a target is stored.
  • gather_by() and reduce_by() now gather/reduce everything if no columns are specified.
  • Change the default parallelization of the imports. Previously, make(jobs = 4) was equivalent to make(jobs = c(imports = 4, targets = 4)). Now, make(jobs = 4) is equivalent to make(jobs = c(imports = 1, targets = 4)). See issue 553 for details.
  • Add a console message for building the priority queue when verbose is at least 2.
  • Condense load_mtcars_example().
  • Deprecate the hook argument of make() and drake_config().
  • In gather_by() and reduce_by(), do not exclude targets with all NA gathering variables.

Major release: proper clustermq support and reduced overhead in make()

29 Sep 22:17

Breaking changes

For the sake of reproducibility and speed, drake version 6.0.0 is more discerning in how it detects dependencies. It now detects only:

  1. Targets in the plan.
  2. Functions and objects in the environment.
  3. Objects and functions from packages that are explicitly namespaced with :: and :::.

In other words, there is a clearer line between what drake detects and what it does not: drake no longer dives into packages or parent environments by default (see the sketch below). The old approach

  1. Made workflows more brittle (likely to fall out of date).
  2. Was categorically inferior to packrat in terms of package reproducibility.

Unfortunately, the change also puts old workflows out of date. Sorry for the inconvenience.
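
A minimal sketch of the new expectation; raw_data stands in for an upstream target or import:

    plan <- drake_plan(
      tidy_data = dplyr::mutate(raw_data, y = x + 1) # dplyr::mutate() is tracked because it is explicitly namespaced
    )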

Other breaking changes that put old projects out of date:

  • Avoid serialization in digest() wherever possible. This puts old drake projects out of date, but it improves speed.
  • Require R version >= 3.3.0 rather than >= 3.2.0. Tests and checks still run fine on 3.3.0, but the required version of the stringi package no longer compiles on 3.2.0.

Bug fixes

  • In the call to unlink() in clean(), set recursive and force to FALSE. This should prevent the accidental deletion of whole directories.
  • Previously, clean() deleted input-only files if no targets from the plan were cached. A patch and a unit test are included in this release.
  • loadd(not_a_target) no longer loads every target in the cache.
  • Exclude each target from its own dependency metadata in the "deps" igraph vertex attribute (fixes #503).
  • Detect inline code dependencies in knitr_in() file code chunks.
  • Remove more calls to sort(NULL) that caused warnings in R 3.3.3.
  • Fix a bug on R 3.3.3 where analyze_loadd() was sometimes quitting with "Error: attempt to set an attribute on NULL".
  • Do not call digest::digest(file = TRUE) on directories. Instead, set hashes of directories to NA. Users should still not use directories as file dependencies.
  • If files are declared as dependencies of custom triggers ("condition" and "change"), include them in vis_drake_graph(). Previously, these files were missing from the visualization, but actual workflows worked just fine. Ref: https://stackoverflow.com/questions/52121537/trigger-notification-from-report-generation-in-r-drake-package
  • Work around mysterious codetools failures in R 3.3 (add a tryCatch() statement in find_globals()).

New features

  • Add a proper clustermq-based parallel backend: make(parallelism = "clustermq"). See the sketch after this list.
  • evaluate_plan(trace = TRUE) now adds a *_from column to show the origins of the evaluated targets. Try evaluate_plan(drake_plan(x = rnorm(n__), y = rexp(n__)), wildcard = "n__", values = 1:2, trace = TRUE).
  • Add functions gather_by() and reduce_by(), which gather on custom columns in the plan (or columns generated by evaluate_plan(trace = TRUE)) and append the new targets to the previous plan.
  • Expose the template argument of clustermq functions (e.g. Q() and workers()) as an argument of make() and drake_config().
  • Add a new code_to_plan() function to turn R scripts and R Markdown reports into workflow plan data frames.
  • Add a new drake_plan_source() function, which generates lines of code for a drake_plan() call. This drake_plan() call produces the plan passed to drake_plan_source(). The main purpose is visual inspection (we even have syntax highlighting via prettycode) but users may also save the output to a script file for the sake of reproducibility or simple reference.
  • Deprecate deps_targets() in favor of a new deps_target() function (singular) that behaves more like deps_code().
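
A minimal sketch of the clustermq backend with a scheduler template; the scheduler, template file, and template fields are hypothetical and depend on your cluster, and plan is assumed to exist:

    library(drake)

    options(
      clustermq.scheduler = "slurm",
      clustermq.template = "slurm_clustermq.tmpl" # hypothetical template file
    )
    make(
      plan,
      parallelism = "clustermq",
      jobs = 8,
      template = list(log_file = "worker%a.log") # passed through to the clustermq workers
    )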

Enhancements

  • Smooth the edges in vis_drake_graph() and render_drake_graph().
  • Make hover text slightly more readable in vis_drake_graph() and render_drake_graph().
  • Align hover text properly in vis_drake_graph() using the "title" node column.
  • Optionally collapse nodes into clusters with vis_drake_graph(collapse = TRUE).
  • Improve dependency_profile() to show major trigger hashes side by side, so it can tell the user whether the command, a dependency, an input file, or an output file changed since the last make().
  • Choose more appropriate places to check that the txtq package is installed.
  • Improve the help files of loadd() and readd(), giving specific usage guidance in prose.
  • Memoize all the steps of build_drake_graph() and print to the console the ones that execute.
  • Skip some tests if txtq is not installed.

Interim development release: proper clustermq support and memoized preprocessing

24 Aug 03:26
  • Add a proper clustermq-based parallel backend: make(parallelism = "clustermq").
  • Smooth the edges in vis_drake_graph() and render_drake_graph().
  • Make hover text slightly more readable in vis_drake_graph() and render_drake_graph().
  • Align hover text properly in vis_drake_graph() using the "title" node column.
  • Optionally collapse nodes into clusters with vis_drake_graph(collapse = TRUE).
  • Improve dependency_profile() to show major trigger hashes side by side, so it can tell the user whether the command, a dependency, an input file, or an output file changed since the last make().
  • Choose more appropriate places to check that the txtq package is installed.
  • Expose the template argument of clustermq functions (e.g. Q() and workers()) as an argument of make() and drake_config().
  • Improve the help files of loadd() and readd(), giving specific usage guidance in prose.
  • Bugfix: loadd(not_a_target) no longer loads every target in the cache.
  • Bugfix: exclude each target from its own dependency metadata in the "deps" igraph vertex attribute (fixes #503).
  • Add a new code_to_plan() function to turn R scripts and R Markdown reports into workflow plan data frames.
  • Add a new drake_plan_source() function, which generates lines of code for a drake_plan() call. This drake_plan() call produces the plan passed to drake_plan_source(). The main purpose is visual inspection (we even have syntax highlighting via prettycode) but users may also save the output to a script file for the sake of reproducibility or simple reference.
  • Memoize all the steps of build_drake_graph() and print to the console the ones that execute.

Flexible triggers

07 Aug 12:11
  • Overhaul the interface for triggers and add new trigger types ("condition" and "change"). See the sketch after this list.
  • Offload drake's code examples to this repository and make drake_example() and drake_examples() download examples from there.
  • Optionally show output files in graph visualizations. See the show_output_files argument to vis_drake_graph() and friends.
  • Repair output file checksum operations for distributed backends like "clustermq_staged" and "future_lapply".
  • Internally refactor the igraph attributes of the dependency graph to allow for smarter dependency/memory management during make().
  • Enable vis_drake_graph() and sankey_drake_graph() to save static image files via webshot.
  • Deprecate static_drake_graph() and render_static_drake_graph() in favor of drake_ggraph() and render_drake_ggraph().
  • Add a columns argument to evaluate_plan() so users can evaluate wildcards in columns other than the command column of plan.
  • Name the arguments of target() so users do not have to name them explicitly.
  • Lay the groundwork for a special pretty print method for workflow plan data frames.
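
A minimal sketch of the new trigger types; fetch_results() and new_data_available() are placeholder functions, and data.csv is a hypothetical file:

    library(drake)

    plan <- drake_plan(
      results = target(
        fetch_results(),
        trigger = trigger(
          condition = new_data_available(), # rebuild whenever this condition evaluates to TRUE
          change = file.mtime("data.csv")   # rebuild whenever this value changes
        )
      )
    )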