Finally trying out "future" parallelism. I have 3 targets each evaluate_plan'ed into 10000 pieces.
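For context, the setup looks roughly like this (a sketch with placeholder target and function names, not the real plan; task_a(), task_b(), and task_c() are made up):

```r
library(drake)
# 3 targets, each expanded into 10000 pieces via a wildcard.
plan <- drake_plan(
  a = task_a(i__),
  b = task_b(i__),
  c = task_c(i__)
)
plan <- evaluate_plan(plan, wildcard = "i__", values = seq_len(10000))
make(plan, parallelism = "future", jobs = 8)
```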
The main while loop takes a really long time to get through the first 20000 up-to-date targets before starting on target 20001. I believe this work is done using lightly_parallelize in other types of parallelism, so it is not as bad there.
Perhaps this method could trim the queue first by subsetting using the outdated function?
Of course, this is another issue that'll be easier after dealing with #440.
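Something like this is what I mean by trimming the queue first (just a sketch; I have not checked whether subsetting the plan this way plays nicely with the scheduler):

```r
# Build only the targets that outdated() reports as stale, so the
# master loop never iterates over the 20000 up-to-date ones.
config <- drake_config(plan)
stale <- outdated(config)
make(plan[plan$target %in% stale, ], parallelism = "future", jobs = 8)
```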
#440 may speed this up to a degree, and there may be ways to locally parallelize parts of the master process. I will think about it. But for you, maybe a different tack is more appropriate. You have a ton of targets, but the structure of the dependency network is extremely simple. I think I could help you far more by adding a new "clustermq_staged" backend (described at mschubert/clustermq#86 (comment)) and a "future_lapply_staged" backend. I do not usually recommend staged parallelism, but neither do I believe that the perfect should be the enemy of the good.
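Conceptually, a staged backend would amount to something like this (a rough sketch, not the eventual implementation; build_target() and the list of stages are placeholders):

```r
library(future.apply)
future::plan(future::multisession)
# Each "stage" is a batch of targets whose dependencies are already
# built, so the whole batch goes out in one future_lapply() call and
# the master does no per-target scheduling within a stage.
for (stage in stages) {  # stages: placeholder list of target batches
  future.apply::future_lapply(stage, build_target)  # build_target(): placeholder
}
```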
Hmm... I do not regret #452, but after looking back at the code, I strongly believe we have the same bottleneck as #435 (which #440 will solve). See below (dependencies() calls igraph::adjacent_vertices()).
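To illustrate the kind of per-target lookup involved (a simplified stand-in, not drake's actual dependencies() code):

```r
library(igraph)
# Stand-in for the lookup: find a target's upstream dependencies by
# querying the igraph object. Repeating this for tens of thousands of
# targets, one at a time, is where the time goes.
deps_of <- function(target, graph) {
  names(igraph::adjacent_vertices(graph, v = target, mode = "in")[[1]])
}
```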