
Fix: MPI-Distribution w/o SC #504

Merged: 4 commits into ECP-WarpX:development from fix-mpi-no-sc on Jan 28, 2024
Conversation

@ax3l (Member) commented on Jan 10, 2024

Without space charge, we can simply split the particles into equal chunks across MPI ranks during initialization and leave them there permanently.
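As an illustration of that equal-chunk split, here is a minimal sketch (not the actual ImpactX initialization code; the helper name `local_particle_count` and the remainder handling are assumptions) of how each rank could compute its local share of a global particle count with AMReX's `ParallelDescriptor`:

```cpp
#include <AMReX_ParallelDescriptor.H>

// Hedged sketch: split npart_total into near-equal per-rank chunks at init,
// spreading the remainder over the first ranks. Function name is hypothetical.
long local_particle_count (long npart_total)
{
    const long nranks = amrex::ParallelDescriptor::NProcs();
    const long rank   = amrex::ParallelDescriptor::MyProc();

    long n = npart_total / nranks;        // equal base share per rank
    if (rank < npart_total % nranks) {    // first ranks absorb the remainder
        ++n;
    }
    return n;                             // particles this rank creates and keeps
}
```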

This fixes grid generation by setting `max_grid_size` to the AMReX `blocking_factor` (if not set by the user). It also avoids all calls to `ResizeMesh()` and `Redistribute()` for now.
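For context, a minimal sketch of that default (assumptions: the `set_default_max_grid_size` helper name and the use of `amrex::ParmParse` on the `amr` prefix; ImpactX's actual grid setup may differ):

```cpp
#include <AMReX_ParmParse.H>

// Hedged sketch: if the user did not set amr.max_grid_size, default it to the
// blocking factor so every MPI rank can receive at least one grid block.
void set_default_max_grid_size ()
{
    amrex::ParmParse pp_amr("amr");

    int blocking_factor = 8;                          // AMReX's usual default
    pp_amr.query("blocking_factor", blocking_factor);

    int max_grid_size = 0;
    if (!pp_amr.query("max_grid_size", max_grid_size)) {
        pp_amr.add("max_grid_size", blocking_factor); // user gave no value
    }
}
```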

Fix #503

@ax3l added the labels bug (Something isn't working), bug: affects latest release (Bug also exists in latest release version), and component: core (Core ImpactX functionality) on Jan 10, 2024
src/ImpactX.cpp: review comment (outdated, resolved)
@ax3l added this to the Advanced Methods (SciDAC) milestone on Jan 10, 2024
src/ImpactX.cpp: flagged issue fixed
@ax3l force-pushed the fix-mpi-no-sc branch 6 times, most recently from b12b65c to f130791, on January 27, 2024 at 23:10
@ax3l changed the title from "[Draft] Fix: MPI-Distribution w/o SC" to "Fix: MPI-Distribution w/o SC" on Jan 27, 2024
@ax3l requested a review from @atmyers on January 27, 2024 at 23:27
- Ensure every MPI rank w/o space charge has a block, by limiting
  the `max_grid_size`, too.
- Avoid Redistributing w/o space charge.
- Expose blocking factor control to Python (a hedged binding sketch follows this list).
- Add blocking factor for very small sims.
- Slightly adjust tolerance for the MPI run.
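As a hedged illustration of the Python exposure mentioned above (the binding name `set_blocking_factor` and the ParmParse-based forwarding are assumptions, not ImpactX's actual binding code):

```cpp
#include <pybind11/pybind11.h>
#include <AMReX_ParmParse.H>

namespace py = pybind11;

// Hypothetical pybind11 binding: lets Python scripts set the AMReX blocking
// factor, which (with the change above) also bounds max_grid_size.
void init_blocking_factor (py::module_ & m)
{
    m.def("set_blocking_factor",
          [](int bf) {
              amrex::ParmParse pp_amr("amr");
              pp_amr.add("blocking_factor", bf);
          },
          py::arg("blocking_factor"),
          "Set the AMReX blocking factor used during grid generation.");
}
```

In this sketch, a Python script would call something like `impactx.set_blocking_factor(16)`; the real module and function names may differ.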
@ax3l merged commit 1d463a5 into ECP-WarpX:development on Jan 28, 2024
15 checks passed
@ax3l deleted the fix-mpi-no-sc branch on January 28, 2024 at 02:53
gid = *it;  // grid (box) index assigned to this MPI rank
}
}
// place the initialized particles into (level lid, grid gid, tile tid)
auto& particle_tile = DefineAndReturnParticleTile(lid, gid, tid);
@ax3l (Member, Author) commented:

@atmyers @WeiqunZhang for ParIter loops after initialization, with OpenMP parallelism, does it matter which tid we use?

Or will we now always have one tile and no on-rank parallelism?
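To make the question concrete, here is a minimal sketch (not ImpactX code; the `count_local_particles` helper and the `ParIterType` default template argument are assumptions) of the post-initialization iteration pattern: each `ParIter` step visits one (grid, tile) chunk, so if a rank ends up with a single grid and a single tile, the loop body runs once on that rank regardless of the `tid` chosen at init.

```cpp
#include <AMReX_Particles.H>
#include <AMReX_GpuControl.H>

// Hedged sketch of the canonical AMReX particle loop with OpenMP.
template <typename PC, typename ParIter = typename PC::ParIterType>
long count_local_particles (PC& pc, int lev)
{
    long np_local = 0;
#ifdef AMREX_USE_OMP
#pragma omp parallel reduction(+:np_local) if (amrex::Gpu::notInLaunchRegion())
#endif
    for (ParIter pti(pc, lev); pti.isValid(); ++pti) {
        // one (grid, tile) chunk owned by this rank per iteration
        np_local += pti.numParticles();
    }
    return np_local;
}
```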

@ax3l (Member, Author) commented:

(Please note we never call pc.Redistribute() in this mode)
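A hedged illustration of that statement (the `space_charge_enabled` flag and the `maybe_redistribute` helper are assumptions, not the actual ImpactX call site):

```cpp
// Only redistribute particles across ranks when space charge (and hence the
// mesh decomposition) is in use; otherwise particles stay on the rank that
// created them at init.
template <typename PC>
void maybe_redistribute (PC& pc, bool space_charge_enabled)
{
    if (space_charge_enabled) {
        pc.Redistribute();  // amrex::ParticleContainer::Redistribute()
    }
}
```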

Labels
bug (Something isn't working), bug: affects latest release (Bug also exists in latest release version), component: core (Core ImpactX functionality)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

MPI Domain-Decomposition w/o Space Charge
2 participants