Parallel simulations using MPI.jl #141

Draft

wants to merge 45 commits into base: master

Changes from 1 commit
Commits (45)
c2b0db6
double ghost working and solver fix
Apr 23, 2024
07aca6f
working mpi Isend/Irecv! with WaterLily
Jun 9, 2024
ffad91a
add new WaterLily.BC! function
Jun 10, 2024
6d41236
update examples/Project.toml
Jun 10, 2024
c404a52
Merge branch 'weymouth:master' into MPI
marinlauber Jun 10, 2024
66d485b
add BC! function for scalar and vector fields
Jun 11, 2024
ef3316a
update examples/Project.toml
Jun 11, 2024
e168ba2
working conv_diff! but not project!
Jun 12, 2024
8994832
fix loc function for double halos
Jun 12, 2024
054711e
add MPI functions and Poisson working
Jun 24, 2024
6977c1f
add MPI as extension
Jun 24, 2024
511cc44
try and fix extension, not working
Jun 25, 2024
2d4f4bf
merge master into MPI
Jun 25, 2024
cae2d1e
type dispatch for BC! functions
Jun 25, 2024
3294511
working mom_step! with Poisson solver
marinlauber Jul 3, 2024
4d2cc50
running with MG solver, but no convergence
Jul 3, 2024
b94babe
start proper testing
marinlauber Jul 5, 2024
23a96d9
proper testing and psolver issue
Jul 5, 2024
2945552
fix solver inside/inside_u
Jul 6, 2024
cd1948f
fixed inside function for double halos
Jul 7, 2024
312b1b8
working parallel poisson solver
Jul 11, 2024
50c8695
add MPIArray type
Jul 11, 2024
a8abd84
Merge branch 'WaterLily-jl:master' into MPI
marinlauber Jul 11, 2024
1ee9341
fix type dispatch for MPIArray
Jul 11, 2024
f09736d
fix type dispatch for MPIArray
Jul 11, 2024
4a5eb0e
AllReduce in residuals!(pois)
marinlauber Jul 11, 2024
281818b
perBC! in Jacobi
marinlauber Jul 12, 2024
7241d39
tidy solver and MPI
marinlauber Jul 12, 2024
ec72ec1
add exitBC! function
marinlauber Jul 12, 2024
7fdbdd8
MPIArray working with sims
marinlauber Jul 12, 2024
b62cde4
clean MPIArray methods
marinlauber Jul 12, 2024
13388de
add Parallel VTK function
marinlauber Jul 13, 2024
883e17b
fix pvtk export with MPI
marinlauber Jul 15, 2024
8bf893a
MPI as an extension running
Jul 16, 2024
283a46b
MPI in extension and remove old files
Jul 16, 2024
9af4ad2
move all write files to their extensions and tidy examples
marinlauber Jul 17, 2024
0bd4e68
remove unused tests
marinlauber Jul 17, 2024
cdc1ba6
move some tests into proper testing file
marinlauber Jul 17, 2024
5b6de27
reduce mpi swaps in BC!
Jul 17, 2024
e84b638
fixed loc issue
Aug 5, 2024
b098083
merge master into MPI and add/update tests
Aug 6, 2024
f24199b
fix test VTK and MPI
Aug 6, 2024
dc2c90f
more elegant global_loc() function
Aug 11, 2024
3be8f8f
fix write vtk error and convective exit
Aug 11, 2024
3153ee2
non-allocating halo swap for scalar fields and more tests
Aug 12, 2024
4 changes: 1 addition & 3 deletions examples/TwoD_CircleMPI.jl
@@ -1,11 +1,9 @@
-#mpiexecjl --project= -n 4 julia TwoD_CircleMPI.jl
+#mpiexecjl --project=. -n 2 julia TwoD_CircleMPI.jl

using WaterLily
using WriteVTK
using MPI
using StaticArrays
using Printf: @sprintf
# include("../WaterLilyMPI.jl") # this uses the old functions

# make a writer with some attributes, now we need to apply the BCs when writing
velocity(a::Simulation) = a.flow.u |> Array;
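A note on the launch command in the header comment above: mpiexecjl is MPI.jl's wrapper around the system mpiexec that forwards the Julia project flags. A typical way to set it up and launch this example, assuming a default Julia depot, is:

using MPI
MPI.install_mpiexecjl()   # installs the `mpiexecjl` wrapper script into the depot's bin directory

# then, from a shell with that bin directory on PATH:
#   mpiexecjl --project=. -n 2 julia TwoD_CircleMPI.jl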
71 changes: 52 additions & 19 deletions ext/WaterLilyMPIExt.jl
@@ -74,34 +74,67 @@ end

This function sets the boundary conditions of the array `a` using the MPI grid.
"""
# function BC!(a::MPIArray,A,saveexit=false,perdir=())
# N,n = WaterLily.size_u(a)
# for i ∈ 1:n, d ∈ 1:n
# # get data to transfer @TODO use @views
# send1 = a[buff(N,-d),i]; send2 = a[buff(N,+d),i]
# recv1 = zero(send1); recv2 = zero(send2)
# # swap
# mpi_swap!(send1,recv1,send2,recv2,neighbors(d),mpi_grid().comm)

# # this sets the BCs on the domain boundary and transfers the data
# if mpi_wall(d,1) # left wall
# if i==d # set flux
# a[halos(N,-d),i] .= A[i]
# a[WaterLily.slice(N,3,d),i] .= A[i]
# else # zero gradient
# a[halos(N,-d),i] .= reverse(send1; dims=d)
# end
# else # neighbor on the left
# a[halos(N,-d),i] .= recv1
# end
# if mpi_wall(d,2) # right wall
# if i==d && (!saveexit || i>1) # convection exit
# a[halos(N,+d),i] .= A[i]
# else # zero gradient
# a[halos(N,+d),i] .= reverse(send2; dims=d)
# end
# else # neighbor on the right
# a[halos(N,+d),i] .= recv2
# end
# end
# end
using EllipsisNotation
function BC!(a::MPIArray,A,saveexit=false,perdir=())
    N,n = WaterLily.size_u(a)
    for d ∈ 1:n # transfer full halos in each direction
        # get data to transfer @TODO use @views
        send1 = a[buff(N,-d),:]; send2 = a[buff(N,+d),:]
        recv1 = zero(send1); recv2 = zero(send2)
        # swap
        mpi_swap!(send1,recv1,send2,recv2,neighbors(d),mpi_grid().comm)

        # mpi boundary swap
        !mpi_wall(d,1) && (a[halos(N,-d),:] .= recv1) # halo swap
        !mpi_wall(d,2) && (a[halos(N,+d),:] .= recv2) # halo swap

        for i ∈ 1:n # this sets the BCs on the physical boundary
            if mpi_wall(d,1) # left wall
                if i==d # set flux
                    a[halos(N,-d),i] .= A[i]
                    a[WaterLily.slice(N,3,d),i] .= A[i]
                else # zero gradient
                    a[halos(N,-d),i] .= reverse(send1[..,i]; dims=d)
                end
            end
            if mpi_wall(d,2) # right wall
                if i==d && (!saveexit || i>1) # convection exit
                    a[halos(N,+d),i] .= A[i]
                else # zero gradient
                    a[halos(N,+d),i] .= reverse(send2[..,i]; dims=d)
                end
            end
        end
    end
end
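mpi_swap! itself is not shown in this excerpt. Based on the commit history ("working mpi Isend/Irecv! with WaterLily"), a minimal sketch of such a non-blocking halo exchange with MPI.jl could look like the following; the function name, tags, and neighbour ordering are illustrative assumptions, not the extension's actual implementation.

using MPI

function mpi_swap_sketch!(send1,recv1,send2,recv2,neighbors,comm)
    # post both receives first, then the matching sends, and wait on all four requests;
    # a missing neighbour can be MPI.PROC_NULL, which MPI treats as a no-op
    reqs = MPI.Request[]
    push!(reqs, MPI.Irecv!(recv1, comm; source=neighbors[1], tag=0)) # from lower neighbour
    push!(reqs, MPI.Irecv!(recv2, comm; source=neighbors[2], tag=1)) # from upper neighbour
    push!(reqs, MPI.Isend(send1, comm; dest=neighbors[1], tag=1))    # to lower neighbour
    push!(reqs, MPI.Isend(send2, comm; dest=neighbors[2], tag=0))    # to upper neighbour
    MPI.Waitall(reqs)
end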