Fix: MPI-Distribution w/o SC #504
Conversation
Force-pushed b12b65c to f130791
- Ensure every MPI rank w/o space charge has a block, by limiting the `max_grid_size`, too.
- Avoid redistributing w/o space charge.
- Expose blocking factor control to Python.
Add blocking factor for very small sims. Slightly adjust tolerance for MPI run.
```cpp
        gid = *it;
    }
}
auto& particle_tile = DefineAndReturnParticleTile(lid, gid, tid);
```
@atmyers @WeiqunZhang for `ParIter` loops after initialization, with OpenMP parallelism, does it matter which `tid` we use?
Or will we now always have one tile and no on-rank parallelism?
(Please note we never call `pc.Redistribute()` in this mode.)
Without space charge, we can simply split the particles up during init in equal chunks and leave them there forever.
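The equal-chunk split described above can be sketched as follows (illustrative Python only; the function name `split_equal_chunks` is hypothetical and not part of the actual C++ implementation, which assigns particles during initialization on each rank):

```python
def split_equal_chunks(num_particles: int, num_ranks: int):
    """Assign a contiguous [start, stop) particle range to each rank.

    Remainder particles go to the lowest ranks, so chunk sizes differ
    by at most one. Once assigned, particles stay on their rank forever
    (no Redistribute call is needed without space charge).
    """
    base, rem = divmod(num_particles, num_ranks)
    ranges = []
    start = 0
    for rank in range(num_ranks):
        count = base + (1 if rank < rem else 0)
        ranges.append((start, start + count))
        start += count
    return ranges
```

For example, 10 particles over 3 ranks yields ranges `(0, 4)`, `(4, 7)`, `(7, 10)`: every rank gets work and no later migration is required.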
This fixes grid generation by setting the `max_grid_size` to the AMReX `blocking_factor` (if not set by the user). Also, this avoids all calls to `ResizeMesh()` and `Redistribute` for now.

Fix #503
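The grid-sizing fallback can be sketched as below (illustrative Python for one dimension; the actual change is in the AMReX/ImpactX C++ grid setup, and these function names are made up for the sketch):

```python
def effective_max_grid_size(user_max_grid_size, blocking_factor):
    """If the user did not set max_grid_size, fall back to the
    blocking factor, so the domain is chopped into many small boxes
    and every MPI rank can receive at least one block."""
    if user_max_grid_size is not None:
        return user_max_grid_size
    return blocking_factor


def num_boxes_1d(n_cells: int, max_grid_size: int) -> int:
    """Number of boxes along one dimension after chopping a domain of
    n_cells cells into pieces of at most max_grid_size cells each."""
    return -(-n_cells // max_grid_size)  # ceiling division
```

With 32 cells and a blocking factor of 8, an unset `max_grid_size` falls back to 8 and the dimension splits into 4 boxes, enough to cover 4 ranks; a large user-set `max_grid_size` would instead produce a single box and leave ranks empty.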