Profiling studies on more platforms #1079
Comments
I forgot to emphasize: the flame graphs from
I just did some profiling on a personal ThinkPad X1 Carbon (6th Gen) with these specs:
On Linux, results were similar to what I have been seeing on other machines. On the Windows partition of the same machine, interaction with the file system was much slower. Here are profiling results for the same workflow as before. This reminds me of #937. Some machines just have slower file systems, and that is going to matter. I think we are already saving targets as fast as possible, especially with custom data formats. But maybe there is something we can do about all the tiny file operations that do bookkeeping: metadata, progress, history, and data recovery. Perhaps those should exist outside

Related: #937.
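One hedged idea for the bookkeeping overhead above: batch the many tiny per-target files into a single consolidated file, trading many open/write/close cycles for one. The sketch below is a toy illustration in base R; the directory names and metadata structure are invented for the example and are not drake's actual storage format.

```r
# Sketch: many tiny bookkeeping files vs. one consolidated file.
# (Names and structure are hypothetical, not drake internals.)
dir_many <- tempfile("meta_many_")
dir.create(dir_many)
meta <- lapply(1:100, function(i) list(target = paste0("t", i), seed = i))

# One small file per target: 100 separate open/write/close cycles.
for (m in meta) saveRDS(m, file.path(dir_many, paste0(m$target, ".rds")))

# All bookkeeping in a single file: one write, one read.
one_file <- tempfile(fileext = ".rds")
saveRDS(meta, one_file)

# The consolidated file round-trips the same metadata.
round_trip <- readRDS(one_file)
stopifnot(identical(round_trip, meta))
```

On file systems where per-file overhead dominates (as in the Windows results above), collapsing bookkeeping into fewer files could matter more than how fast each individual write is.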
Looking at the flame graph on Windows, it looks like we could gain some speed with a faster alternative to file.rename().
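To explore that, here is a minimal base-R scaffold for timing file.rename() against a copy-then-delete fallback (the fallback also works across file systems, where a plain rename cannot). The payload and timing harness are illustrative assumptions, not drake's actual code path.

```r
# Sketch: micro-benchmark scaffold for file.rename() vs. copy + delete.
src <- tempfile()
dst <- tempfile()
writeLines("payload", src)

# Plain rename: usually a single fast metadata operation.
t_rename <- system.time({
  ok <- file.rename(src, dst)
})
stopifnot(ok, file.exists(dst), !file.exists(src))

# Copy-then-delete fallback: slower, but works across file systems.
writeLines("payload", src)
t_copy <- system.time({
  ok <- file.copy(src, dst, overwrite = TRUE) && file.remove(src)
})
stopifnot(ok, file.exists(dst), !file.exists(src))

print(rbind(rename = t_rename, copy_delete = t_copy))
```

Running this on the slow Windows machines in question would tell us whether rename itself is the bottleneck or whether any small-file operation pays the same cost.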
Prework
- Read drake's code of conduct.
- Install the development version (remotes::install_github("ropensci/drake")) and mention the SHA-1 hash of the Git commit you install.

Description
drake is much faster than it used to be, but it always needs work. I have profiled extensively on my home Linux machine and to some degree on the Mac and Linux machines I can access at work. We may unearth new bottlenecks if we run profiling studies on Windows and on rigs with slow file systems.

You can help
I would really appreciate your help! The easiest way to contribute is with the bottleneck issue template (https://github.com/ropensci/drake/blob/master/.github/ISSUE_TEMPLATE/bottleneck.md#benchmarks), and I have an existing profiling workflow here.
static.R is a pretty good benchmark, though if your system is not super powerful, consider attenuating seq_len(1e4) in the plan to something smaller. The instructions generalize well, and if you plug in your own plan, it would really help us cover a more diverse set of use cases.