RAY_running_time and memory_usage #245

Open
Nastassiia opened this issue Oct 31, 2016 · 0 comments

Nastassiia commented Oct 31, 2016

We have 2 × 7.7 Gb of paired-end ~100 bp reads.
With k-mer = 45, Ray assembled them in about 2h10m on a 16-core, 32 GB node.
Suspecting this was too fast, we repeated the assembly on a 64-core, 512 GB node. Surprisingly, it took longer: about 2h20m.
Command
mpirun -n 16 ~/apps/Ray-2.3.1/Ray -k 45 -amos -p ~/fw.fq ~/rev.fq -o ...
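
In case it is useful, a like-for-like way to repeat the timing on both nodes would be something along these lines (just a sketch; the output prefix and log file names are made up):

for n in 16 64; do
  /usr/bin/time -v -o time_n$n.log \
    mpirun -n $n ~/apps/Ray-2.3.1/Ray -k 45 -amos -p ~/fw.fq ~/rev.fq -o ray_k45_n$n
done
# GNU time's "Elapsed (wall clock) time" and "Maximum resident set size" in
# time_n$n.log give the wall time and an approximate peak RSS for each run.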

What's interesting is that the output numbers for the two assemblies are almost the same (N50, max contig, etc.).

  1. So the first question: why does the run time increase with 64 cores? I have read that MPI is not always more productive with a large number of cores (processes) because of the increased messaging between them. Can this be the issue?

Resources used:
For 64 cores: cput=148:50:45, mem=24.7 GB, vmem=56.4 GB
For 16 cores: cput=34:28:06, mem=14.6 GB, vmem=18.2 GB
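
If I read these numbers right: 148:50:45 of CPU time over a ~2h20m wall clock is about 148.8 h / 2.33 h ≈ 64, and 34:28:06 over ~2h10m is about 34.5 h / 2.17 h ≈ 16, so in both runs the ranks are busy essentially the whole time. In other words, the 64-core run spends roughly 148.8 / 34.5 ≈ 4.3 times the total CPU for the same assembly in slightly more wall time, which would fit the picture of the extra cycles going into message passing and waiting rather than useful work.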

Scaffolding is where the main difference in time comes from: it took 57 min with 64 cores versus 27 min with 16 cores. (Up to the scaffolding step, the 64-core run is actually a little faster.)

Sequence loading took 9 min longer with 64 cores.
Each of the 64 ranks loads 4 times fewer reads than each of the 16 ranks (which is expected).
But the memory used for reads on each of the 64 ranks is only 2 times smaller (which is not so obvious). 2. Why is that?
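
My only guess (with made-up numbers, so please correct me): if each rank holds some fixed per-rank overhead F on top of its 1/n share of the read data R, then per-rank memory goes from F + R/16 to F + R/64. With F around R/32, that is 3R/32 versus 3R/64, i.e. only a factor of 2, even though the read share itself dropped by a factor of 4.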

Again, I am a newbie here... To my understanding, vmem is the memory involved in data exchange between disk and RAM.
So can the increased vmem with 64 cores be 'explained' by the larger memory given to the reads on the 64-core ranks? (Larger in the sense that, per rank, the memory for reads drops only by a factor of 2, not 4, when going from 16 to 64 cores.)
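
For what it's worth, while Ray is running one can compare the two quantities directly per rank from /proc (a sketch; the pattern passed to pgrep is an assumption about how the processes are named):

for pid in $(pgrep -f 'Ray-2.3.1/Ray'); do
  echo "PID $pid:"
  grep -E 'VmSize|VmRSS' /proc/$pid/status
done
# VmSize is the virtual address space the rank has reserved (heap, MPI buffers,
# mapped libraries, allocations it has not yet touched); VmRSS is what actually
# resides in RAM.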

Thank you.
Anastasiia
