Parallel simulations using MPI.jl #141
Conversation
OUTDATED COMMENT @weymouth @b-fg @TzuYaoHuang: My initial idea is to incorporate this into the main solver as an extension and use the custom `MPIArray` type. The flow will be constructed using [...]. We could also bind the [...].

SOLVED

And the function [...], where the function [...].
ext/WaterLilyWriteVTKExt.jl (outdated)

```diff
 end
-function vtkWriter(fname="WaterLily";attrib=default_attrib(),dir="vtk_data",T=Float32)
+function vtkWriter(fname="WaterLily";attrib=default_attrib(),dir="vtk_data",T=Float32,extents=[(1:1,1:1)])
 !isdir(dir) && mkdir(dir)
```
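For context, here is a rough sketch of how an `extents` keyword like the one added above could be forwarded to WriteVTK's parallel writer (`pvtk_grid`). The coordinate ranges, field name, and the way the per-rank extents are assembled are assumptions for illustration, not this extension's actual code.

```julia
# Hedged sketch: each rank writes its own piece of a parallel structured file.
# Assumes WriteVTK's pvtk_grid with the `extents` keyword (one index range per rank).
using WriteVTK, MPI

MPI.Initialized() || MPI.Init()
part, nparts = MPI.Comm_rank(MPI.COMM_WORLD) + 1, MPI.Comm_size(MPI.COMm_WORLD |> typeof <: Any ? MPI.COMM_WORLD : MPI.COMM_WORLD)

# assumed 1D split along x with shared faces between neighbouring pieces
extents = [((1:17) .+ 16(p - 1), 1:33) for p in 1:nparts]
xs, ys  = extents[part]                         # this rank's index ranges
p_local = rand(length(xs), length(ys))          # stand-in pressure field

pvtk_grid("WaterLily", xs, ys; part, nparts, extents) do vtk
    vtk["p"] = p_local                          # each rank writes only its piece
end
```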
This `mkdir(dir)` call can actually be an issue if all ranks try to create the dir.
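One way to avoid the race is to let a single rank create the directory and have everyone else wait for it. A minimal sketch, assuming MPI is already available at this point:

```julia
# Hedged sketch: only rank 0 creates `dir`; the other ranks wait at a barrier
# so the directory is guaranteed to exist before any of them writes into it.
using MPI
MPI.Initialized() || MPI.Init()
comm = MPI.COMM_WORLD
dir = "vtk_data"                                   # as in the default above
MPI.Comm_rank(comm) == 0 && !isdir(dir) && mkdir(dir)
MPI.Barrier(comm)
```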
This pull request is a work in progress. I open it now to see how we can add parallel capabilities to WaterLily.jl efficiently, keeping in mind that ultimately we want to be able to do multi-CPU/GPU simulations. I have run most of the examples/TwoD_* files with the double ghost cells again, and they work in serial. This pull request changes/adds many files, and I will briefly describe the changes made.
Changed files

- src/WaterLily.jl: enables passing the type of the Poisson solver to the simulations (mainly to simplify my testing); preliminary MPI extension (not used, to be discussed).
- src/Flow.jl: implement double ghost cells and remove the special QUICK/CD scheme on the boundaries, as it is no longer needed.
- src/MultiLevelPoisson.jl: implement downsampling for double-ghost arrays and change all utility functions accordingly. Explicitly define the dot-product functions so they can be overloaded with the MPI functions later on. Also change the `solver!` function, as the PoissonMPI.jl test did not converge properly with the `Linfty` criteria.
- src/Poisson.jl: add a `perBC!` call in Jacobi (not needed, I think) and adjust the solver.
- src/util.jl: adjust all the `inside` functions and `loc` to account for the double ghost cells. Adjust the `BC!`, `perBC!` and `exitBC!` functions for the double ghost cells. This also introduces a custom array type `MPIArray` that allocates send and receive buffers, to avoid allocating them at every `mpi_swap` call. The new array type allows type dispatch within the extension (see the sketch after this list).
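To make the buffer idea concrete, here is a minimal sketch of a wrapper array that carries preallocated send/receive buffers, so a halo swap does not allocate on every call. The type and field names below are illustrative only, not the PR's actual `MPIArray` definition.

```julia
# Illustrative sketch (not the PR's MPIArray): an AbstractArray wrapper that
# stores reusable send/receive buffers alongside the data it wraps.
struct BufferedArray{T,N,A<:AbstractArray{T,N}} <: AbstractArray{T,N}
    data::A
    send::NTuple{2,Vector{T}}   # low/high-side send buffers, reused every swap
    recv::NTuple{2,Vector{T}}   # low/high-side receive buffers
end
function BufferedArray(data::AbstractArray{T,N}) where {T,N}
    # two ghost layers per side: size the buffers for the largest face
    nbuf = 2 * maximum(length(data) .÷ size(data))
    BufferedArray(data, (zeros(T, nbuf), zeros(T, nbuf)),
                        (zeros(T, nbuf), zeros(T, nbuf)))
end
# minimal AbstractArray interface, forwarded to the wrapped data
Base.size(a::BufferedArray) = size(a.data)
Base.getindex(a::BufferedArray, I...) = getindex(a.data, I...)
Base.setindex!(a::BufferedArray, v, I...) = setindex!(a.data, v, I...)
```

Because the buffers travel with the array, the MPI extension can dispatch on this type (e.g. specialised `BC!`/`perBC!` methods) while the serial code path stays untouched.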
New files

- examples/TwoD_CircleMPI.jl: script to simulate the flow around a 2D circle using MPI.
- examples/TwoD_CircleMPIArray.jl: the classical flow around a 2D circle, but using the custom `MPIArray` type to demonstrate that in serial (if MPI is not loaded) the flow solver still works fine.
- ext/WaterLilyMPIExt.jl: MPI extension that uses type dispatch to define parallel methods for the WaterLily functions. The initial MPI test test/test_mpi.jl should be changed to use this instead.
- WaterLilyMPI.jl: contains all the function overloads needed to perform parallel WaterLily simulations. Defines an `MPIGrid` type that stores information about the decomposition (`global` for now) and the `mpi_swap` function that performs the message passing, together with some MPI utils (see the halo-swap sketch after this list).
- MPIArray.jl: a custom array type that also allocates send and receive buffers to avoid allocating them at every `mpi_swap` call. This is an idea for the final implementation and has not been tested yet.
- FlowSolverMPI.jl: tests for some critical parts of the flow solver, from `sdf` measures to `sim_step`. Use with vis_mpiwaterlily.jl to plot the results on the different ranks.
- PoisonMPI.jl: parallel Poisson solver test on an analytical solution. Use with vis_mpiwaterlily.jl to plot the results on the different ranks.
- diffusion_2D_mpi.jl: initial test of the MPI functions, deprecated.
- vis_diffusion.jl: used to plot the results of diffusion_2D_mpi.jl, deprecated.
- test/poisson.jl: a simple Poisson test, will be removed.
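For illustration, a minimal halo-swap sketch in the spirit of the `mpi_swap` described above, assuming MPI.jl's keyword-based point-to-point API (v0.20+). The `low`/`high` neighbour ranks stand in for what `MPIGrid` stores, the buffers are allocated per call for brevity (exactly what the preallocated buffers above are meant to avoid), and none of the names below are the PR's actual code.

```julia
# Hedged sketch of a one-dimensional halo exchange with two ghost layers.
using MPI

function halo_swap!(u::AbstractMatrix, low::Int, high::Int, comm=MPI.COMM_WORLD)
    # pack the two inner layers adjacent to each boundary
    send_lo = u[3:4, :];            send_hi = u[end-3:end-2, :]
    recv_lo = similar(send_lo);     recv_hi = similar(send_hi)
    # at a physical domain boundary, low/high would be MPI.PROC_NULL
    reqs = MPI.Request[
        MPI.Isend(send_lo,  comm; dest=low,    tag=0),
        MPI.Isend(send_hi,  comm; dest=high,   tag=1),
        MPI.Irecv!(recv_lo, comm; source=low,  tag=1),
        MPI.Irecv!(recv_hi, comm; source=high, tag=0),
    ]
    MPI.Waitall(reqs)
    u[1:2, :]       .= recv_lo      # fill the low-side ghost layers
    u[end-1:end, :] .= recv_hi      # fill the high-side ghost layers
    return u
end
```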
The things that remain to do

- `AllReduce` in `Poisson.residuals!` (see the sketch after this list).
- `@views()` for the send/receive buffers; this could be avoided if we allocate the send and receive buffers together with the arrays, using something similar to what is in the file MPIArray.jl.
- VTK extension to enable the writing of parallel files.
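As a concrete but purely illustrative example of the first item, the local residual contributions can be combined with `MPI.Allreduce` so that every rank agrees on the convergence check; the function below is a sketch, not the solver's actual code.

```julia
using MPI

# Illustrative sketch: reduce the local residual contributions to a global L2 norm.
function global_residual(r::AbstractArray, comm=MPI.COMM_WORLD)
    local_sum = sum(abs2, r)                        # this rank's contribution
    return sqrt(MPI.Allreduce(local_sum, +, comm))  # same value on every rank
end
```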
Some of the results from FlowSolverMPI.jl (figures omitted here):

- basic rank and sdf check
- zeroth kernel moment vector with and without halos
- full `sim_step` check
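For reference, a minimal sketch of what the `sim_step` check above might look like: the standard WaterLily circle case stepped after initialising MPI. It assumes WaterLily's `Simulation`/`AutoBody`/`sim_step!` API and that the PR's `MPIArray` would be plugged in through the `mem` keyword; the actual test in FlowSolverMPI.jl may differ.

```julia
using WaterLily, MPI

MPI.Initialized() || MPI.Init()

# classic 2D circle case; `mem` selects the array backend
function circle_sim(n=128, m=64; Re=250, U=1, mem=Array)
    radius, center = m/8, m/2
    body = AutoBody((x, t) -> √sum(abs2, x .- center) - radius)
    Simulation((n, m), (U, 0), radius; ν=U*radius/Re, body, mem)
end

sim = circle_sim(; mem=Array)   # swap in the PR's MPIArray here once it is exported
sim_step!(sim, 1.0)             # advance to t*U/L = 1
@show MPI.Comm_rank(MPI.COMM_WORLD), sim_time(sim)
```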