Forest Of Octrees #1009

Merged: 119 commits from lroberts36/add-forest-of-octrees into develop, Apr 2, 2024. The changes shown below are from 88 of the 119 commits.

Commits (119)
d1fa2aa
Start on making forest of octrees
lroberts36 Feb 7, 2024
d080719
Seemingly working LogicalLocation based trees
lroberts36 Feb 8, 2024
50d1c59
start on forest refinement
lroberts36 Feb 8, 2024
9158fb2
Working three tree forest example
lroberts36 Feb 8, 2024
437d082
add mesh plotting script
lroberts36 Feb 8, 2024
c40bf29
Work toward automating edge relationship finding
lroberts36 Feb 14, 2024
6ab10ff
small
lroberts36 Feb 14, 2024
cb88d4e
Working automated neighbor finding and plotting
lroberts36 Feb 14, 2024
cbe4882
cleanup
lroberts36 Feb 14, 2024
470f80c
Add squared circle setup
lroberts36 Feb 14, 2024
aae426f
allow for unrefined trees in plots
lroberts36 Feb 14, 2024
95f4997
Add a couple power of 2 bithacks
lroberts36 Feb 21, 2024
dc612cb
More work on connecting trees
lroberts36 Feb 21, 2024
0422280
Start formalizing forest and replicating current functionality
lroberts36 Feb 21, 2024
68219cf
split into separate files and other cleanup
lroberts36 Feb 22, 2024
d777c50
further split
lroberts36 Feb 22, 2024
ee65a3e
More cleanup
lroberts36 Feb 22, 2024
9f5ada2
small
lroberts36 Feb 22, 2024
f2d990a
duplicate old tree functionality in Tree
lroberts36 Feb 22, 2024
f53e85d
Add transformation
lroberts36 Feb 22, 2024
0fcfb50
Duplicate MeshBlockTree functionality in Forest
lroberts36 Feb 22, 2024
36280be
format and lint
lroberts36 Feb 22, 2024
0ada32b
fix integer division bug
lroberts36 Feb 22, 2024
e455ab9
switch to neighbor maps and add gid map to test
lroberts36 Feb 22, 2024
e8510dc
reorg
lroberts36 Feb 26, 2024
b4279ef
format and lint
lroberts36 Feb 26, 2024
3dbbac8
fix typo
lroberts36 Feb 27, 2024
4167195
Add some simplified routines that ignore periodicity
lroberts36 Feb 27, 2024
f809217
Start comparing forest results to old connection results
lroberts36 Feb 27, 2024
40abac4
resolve and set gids
lroberts36 Feb 27, 2024
15a55ee
Make stuff work and move towards logical location including macromesh…
lroberts36 Feb 28, 2024
9da255b
add tree index to LogicalLocation and remove ForestLocation
lroberts36 Feb 28, 2024
a45d508
Make Tree RAII
lroberts36 Feb 28, 2024
791db15
small
lroberts36 Feb 28, 2024
1395f79
Add some code for finding tree neighbors
lroberts36 Feb 28, 2024
68f98bc
Get neighbor ownership to work
lroberts36 Feb 29, 2024
cdb706f
format and lint
lroberts36 Feb 29, 2024
80c8ddf
Allow for transformation between old and new LLs, bugfix
lroberts36 Feb 29, 2024
318b23c
transform LLs appropriately
lroberts36 Feb 29, 2024
7653204
Add tree index to LL label
lroberts36 Feb 29, 2024
a520301
to transformation correctly
lroberts36 Feb 29, 2024
09cf63f
correct transformations
lroberts36 Feb 29, 2024
a84aa5c
Fix ownership bug
lroberts36 Feb 29, 2024
908bb5e
remove unneeded output
lroberts36 Feb 29, 2024
119aff0
Get the block domain correctly
lroberts36 Feb 29, 2024
656390f
Get restarts to work
lroberts36 Feb 29, 2024
43e4f69
build relative orientation correctly
lroberts36 Feb 29, 2024
a7aced0
use correct rounding of integer logs
lroberts36 Feb 29, 2024
9a08c1f
format and lint
lroberts36 Feb 29, 2024
e6ca3cd
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 4, 2024
1ed0e0a
refine in regions as before
lroberts36 Mar 4, 2024
ac4d6b4
save old gids
lroberts36 Mar 4, 2024
b50485e
inherent parent gids
lroberts36 Mar 4, 2024
d4de446
small name change
lroberts36 Mar 5, 2024
5bb5f16
Include boundary conditions in forest
lroberts36 Mar 5, 2024
3bd78da
format and lint
lroberts36 Mar 5, 2024
945c094
format?
lroberts36 Mar 5, 2024
ec0f519
Fix bug
lroberts36 Mar 6, 2024
2cd822e
fix bug
lroberts36 Mar 6, 2024
d7e6b76
fix restart bug
lroberts36 Mar 6, 2024
59c2462
Remove BvalsBase from BoundarySwarms
lroberts36 Mar 6, 2024
2e32e68
Fix MPI bug
lroberts36 Mar 6, 2024
cd2f091
Make tag maps work even when no grid fields exist
lroberts36 Mar 6, 2024
39cb0d0
include buffer ids
lroberts36 Mar 6, 2024
c3948ec
remove dependence on pbvals
lroberts36 Mar 6, 2024
902c0b2
Remove Boundary* spaghetti
lroberts36 Mar 7, 2024
074c64f
cleanup
lroberts36 Mar 7, 2024
d0ad6a0
continued cleanup
lroberts36 Mar 7, 2024
66b6379
format and lint
lroberts36 Mar 7, 2024
d183bda
try
lroberts36 Mar 7, 2024
a509a80
restart works
lroberts36 Mar 7, 2024
8fd1ae8
Working...
lroberts36 Mar 7, 2024
11a80fa
remove MeshblockTree
lroberts36 Mar 7, 2024
440fbb4
format and lint
lroberts36 Mar 7, 2024
7a7c04c
rename neighbor finding
lroberts36 Mar 7, 2024
24d72d8
properly pass ranks
lroberts36 Mar 7, 2024
659382d
actually work with forest locations
lroberts36 Mar 7, 2024
140981b
fix root_level stuff
lroberts36 Mar 7, 2024
3dae6d8
format
lroberts36 Mar 7, 2024
66ad3c5
remove forest mesh example for now
lroberts36 Mar 7, 2024
36abe19
update changelog
lroberts36 Mar 7, 2024
b266c09
update doc
lroberts36 Mar 7, 2024
fd89cfc
remove dead code
lroberts36 Mar 7, 2024
067a7e8
add comments
lroberts36 Mar 11, 2024
1f7e2cd
comment
lroberts36 Mar 11, 2024
9e65714
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 12, 2024
f84ca9b
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 14, 2024
0632a24
Merge branch 'develop' into lroberts36/add-forest-of-octrees
Yurlungur Mar 15, 2024
63f39ef
remove comment
lroberts36 Mar 19, 2024
4439d08
fix autoformatting
lroberts36 Mar 19, 2024
2d983e2
Add small comment
lroberts36 Mar 19, 2024
2e3e619
move function
lroberts36 Mar 19, 2024
ad564d9
use a task name that isn't a c++ keyword
lroberts36 Mar 20, 2024
31cbeb3
add comment
lroberts36 Mar 20, 2024
aa9e0d0
start moving forest
lroberts36 Mar 20, 2024
a9a4433
split up forest source files
lroberts36 Mar 20, 2024
0b6a534
format and lint
lroberts36 Mar 20, 2024
c9a0bb8
Use new ownership function in tests
lroberts36 Mar 20, 2024
3accfa5
Start adding forest unit test
lroberts36 Mar 20, 2024
5d2d23d
Add test and switch to internal map for forest
lroberts36 Mar 20, 2024
f32d2a1
format and lint
lroberts36 Mar 20, 2024
6e0a65a
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 20, 2024
464ed89
switch to map so ordering of trees in gids is correct
lroberts36 Mar 21, 2024
2a9c587
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 21, 2024
94e6d52
move change to pre-release
lroberts36 Mar 26, 2024
e724956
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 27, 2024
a0812e6
remove merge detritus
lroberts36 Mar 27, 2024
c784f9d
Merge branch 'develop' into lroberts36/add-forest-of-octrees
lroberts36 Mar 28, 2024
37ad621
fix apparently offseting bugs for negative logical levels
lroberts36 Mar 28, 2024
87dd8c4
comment
lroberts36 Apr 1, 2024
e1395f4
add fail
lroberts36 Apr 1, 2024
8cf8662
remove dead code
lroberts36 Apr 1, 2024
5f0d31e
update copyright
lroberts36 Apr 1, 2024
fac835b
mv file
lroberts36 Apr 1, 2024
93e7347
finish mv
lroberts36 Apr 1, 2024
6fa6ac8
format
lroberts36 Apr 1, 2024
3da72af
clarify naming scheme of old tree
lroberts36 Apr 1, 2024
2fedbbd
disable gmg tests for the moment
lroberts36 Apr 2, 2024
4206737
correctly exclude gmg tests
lroberts36 Apr 2, 2024
Files changed
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -25,6 +25,7 @@
- [[PR978]](https://github.com/parthenon-hpc-lab/parthenon/pull/978) remove erroneous sparse check

### Infrastructure (changes irrelevant to downstream codes)
- [[PR 1009]](https://github.com/parthenon-hpc-lab/parthenon/pull/1009) Move from a single octree to a forest of octrees
- [[PR 1017]](https://github.com/parthenon-hpc-lab/parthenon/pull/1017) Make regression tests more verbose on failure
- [[PR 1007]](https://github.com/parthenon-hpc-lab/parthenon/pull/1007) Split template instantiations for HDF5 Read/Write attributes to speed up compile times
- [[PR 990]](https://github.com/parthenon-hpc-lab/parthenon/pull/990) Partial refactor of HDF5 I/O code for readability/extendability
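The headline change swaps the single MeshBlockTree for a forest of octrees, with each LogicalLocation carrying the index of the tree it belongs to (cf. the commits "add tree index to LogicalLocation and remove ForestLocation" and "switch to map so ordering of trees in gids is correct"). A minimal sketch of that layout, with illustrative names rather than Parthenon's actual API:

#include <cstdint>
#include <map>
#include <memory>

// Sketch: a location is now (tree, level, integer coords) rather than a
// position inside one global octree.
struct LogicalLocation {
  std::int64_t tree;          // which octree in the forest owns this block
  int level;                  // refinement level within that tree
  std::int64_t lx1, lx2, lx3; // integer coordinates on that level
};

class Tree; // one octree rooted at a single coarse block

struct Forest {
  // An ordered map keeps trees sorted by id, so global block ids assigned by
  // walking the map are reproducible across runs and ranks.
  std::map<std::int64_t, std::shared_ptr<Tree>> trees;
};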
12 changes: 1 addition & 11 deletions doc/sphinx/src/boundary_communication.rst
@@ -155,17 +155,7 @@ In practice, we denote each channel by a unique key
so that the ``Mesh`` can contain a map from these keys to communication
channels. Then, at each remesh, sending blocks and blocks that are
receiving from blocks on a different rank can create new communication
channels and register them in this map. *Implementation detail:* To
build these keys, we currently rely on the
``MeshBlock::std::unique_ptr<BoundaryValues> pbval`` object to get
information about the neighboring blocks and build the channels and
keys. ``BoundaryValues`` has its own communication methods defined, but
none of these are used for the sparse communication. We really only rely
on the information stored in ``BoundaryBase`` (which contains general
information about all of the neighboring blocks on the mesh), which
``BoundaryValues`` inherits from. Eventually, I think ``pbval`` should
be turned into a ``BoundaryBase`` object and ``BoundaryValues`` should
be removed from the code base.
channels and register them in this map.
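A minimal sketch of the key-to-channel map this paragraph describes; the key fields below are assumptions, since the real key layout is not shown in this hunk:

#include <map>
#include <string>
#include <tuple>

struct CommChannel { /* MPI requests and buffers, elided */ };

// Assumed key: (sender gid, receiver gid, variable label). The actual key
// may carry more fields, e.g. a geometric element or buffer id.
using ChannelKey = std::tuple<int, int, std::string>;

struct Mesh {
  std::map<ChannelKey, CommChannel> channel_map;

  // At each remesh, sending blocks and cross-rank receiving blocks call this
  // to (re)register the channels they participate in.
  CommChannel &GetOrCreateChannel(const ChannelKey &key) {
    return channel_map[key]; // default-constructs on first use
  }
};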

MPI Communication IDs
~~~~~~~~~~~~~~~~~~~~~
50 changes: 0 additions & 50 deletions doc/sphinx/src/interface/boundary.rst
@@ -12,53 +12,3 @@ BoundaryCommunication

Pure abstract base class, defines interfaces for managing
``BoundaryStatus`` flags and MPI requests

BoundaryBuffer
--------------

Pure abstract base class, defines interfaces for managing MPI
send/receive and loading/storing data from communication buffers.

BoundaryVariable
----------------

**Derived from**: ``BoundaryCommunication`` and ``BoundaryBuffer``

**Contains**: ``BoundaryData`` for variable and flux correction.

**Knows about**: ``MeshBlock`` and ``Mesh``

Still abstract base class, but implements some methods for sending and
receiving buffers.

BoundaryBase
------------

**Contains**: ``NeighborIndexes`` and ``NeighborBlock`` PER NEIGHBOR,
number of neighbors, neighbor levels

**Knows about**: ``MeshBlock``

Implements ``SearchAndSetNeighbors``

BoundaryValues
--------------

**Derived from**: ``BoundaryBase`` and ``BoundaryCommunication``

**Knows about**: ``MeshBlock``, all the ``BoundaryVariable`` connected
to variables of this block

Central class to interact with individual variable boundary data. Owned
by ``MeshBlock``.

CellCenteredBoundaryVariable
----------------------------

**Derived from**: ``BoundaryVariable``

**Contains**: Shallow copies of variable data, coarse buffer, and fluxes
(owned by ``Variable``)

Owned by ``Variable``, implements loading and setting boundary data,
sending and receiving flux corrections, and more.
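For reference, the inheritance structure the deleted page described, condensed into bare declarations (bodies elided; this mirrors the prose above, not the exact removed headers):

// Pure abstract: BoundaryStatus flags and MPI requests.
class BoundaryCommunication {};
// Pure abstract: MPI send/receive and buffer load/store.
class BoundaryBuffer {};

// Still abstract; implements some buffer send/receive methods.
class BoundaryVariable : public BoundaryCommunication, public BoundaryBuffer {};

// Per-neighbor NeighborIndexes/NeighborBlock; implements SearchAndSetNeighbors.
class BoundaryBase {};

// Owned by MeshBlock; central entry point to per-variable boundary data.
class BoundaryValues : public BoundaryBase, public BoundaryCommunication {};

// Owned by Variable; loads/sets boundary data and flux corrections.
class CellCenteredBoundaryVariable : public BoundaryVariable {};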
8 changes: 4 additions & 4 deletions example/particles/particles.cpp
@@ -529,8 +529,8 @@ TaskStatus StopCommunicationMesh(const BlockList_t &blocks) {
#ifdef MPI_PARALLEL
for (auto &block : blocks) {
auto swarm = block->swarm_data.Get()->Get("my_particles");
for (int n = 0; n < block->pbval->nneighbor; n++) {
NeighborBlock &nb = block->pbval->neighbor[n];
for (int n = 0; n < block->neighbors.size(); n++) {
NeighborBlock &nb = block->neighbors[n];
// TODO(BRR) May want logic like this if we have non-blocking TaskRegions
// if (nb.snb.rank != Globals::my_rank) {
// if (swarm->vbswarm->bd_var_.flag[nb.bufid] != BoundaryStatus::completed) {
@@ -563,8 +563,8 @@ TaskStatus StopCommunicationMesh(const BlockList_t &blocks) {
auto &pmb = block;
auto sc = pmb->swarm_data.Get();
auto swarm = sc->Get("my_particles");
for (int n = 0; n < swarm->vbswarm->bd_var_.nbmax; n++) {
auto &nb = pmb->pbval->neighbor[n];
for (int n = 0; n < pmb->neighbors.size(); n++) {
auto &nb = block->neighbors[n];
swarm->vbswarm->bd_var_.flag[nb.bufid] = BoundaryStatus::waiting;
}
}
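Both hunks replace indirection through pbval with the block's own neighbor list. The resulting idiom, isolated under assumed types:

#include <vector>

struct NeighborBlock { int bufid; /* ... */ };

struct MeshBlock {
  std::vector<NeighborBlock> neighbors; // replaces pbval->neighbor/nneighbor
};

void ResetNeighborFlags(MeshBlock &pmb) {
  // A range-for also sidesteps the int-vs-size_t comparison that the indexed
  // loops above inherit from the old code.
  for (NeighborBlock &nb : pmb.neighbors) {
    (void)nb.bufid; // e.g. bd_var_.flag[nb.bufid] = BoundaryStatus::waiting;
  }
}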
@@ -76,7 +76,6 @@ def printTree(self):


class GitHubApp:

"""
GitHubApp Class

11 changes: 4 additions & 7 deletions src/CMakeLists.txt
@@ -108,14 +108,11 @@ add_library(parthenon
bvals/boundary_conditions_generic.hpp
bvals/boundary_conditions.cpp
bvals/boundary_conditions.hpp

bvals/bvals.cpp
bvals/bvals.hpp
bvals/bvals_base.cpp
bvals/bvals_interfaces.hpp
bvals/neighbor_block.cpp
bvals/neighbor_block.hpp
bvals/boundary_flag.cpp
bvals/bvals_var.cpp
bvals/bvals_swarm.cpp

coordinates/coordinates.hpp
coordinates/uniform_cartesian.hpp
@@ -161,6 +158,8 @@ add_library(parthenon

mesh/amr_loadbalance.cpp
mesh/domain.hpp
mesh/forest.cpp
mesh/forest.hpp
mesh/logical_location.cpp
mesh/logical_location.hpp
mesh/mesh_refinement.cpp
@@ -170,8 +169,6 @@
mesh/mesh.hpp
mesh/meshblock.hpp
mesh/meshblock_pack.hpp
mesh/meshblock_tree.cpp
mesh/meshblock_tree.hpp
mesh/meshblock.cpp

outputs/ascent.cpp
2 changes: 1 addition & 1 deletion src/bvals/boundary_conditions.cpp
@@ -16,7 +16,7 @@

#include "bvals/boundary_conditions.hpp"
#include "bvals/boundary_conditions_generic.hpp"
#include "bvals/bvals_interfaces.hpp"
#include "bvals/neighbor_block.hpp"
#include "defs.hpp"
#include "interface/meshblock_data.hpp"
#include "mesh/domain.hpp"
5 changes: 2 additions & 3 deletions src/bvals/boundary_flag.cpp
@@ -90,9 +90,8 @@ std::string GetBoundaryString(BoundaryFlag input_flag) {
//! \fn CheckBoundaryFlag(BoundaryFlag block_flag, CoordinateDirection dir)
// \brief Called in each MeshBlock's BoundaryValues() constructor. Mesh() ctor only
// checks the validity of user's input mesh/ixn_bc, oxn_bc string values corresponding to
// a BoundaryFlag enumerator before passing it to a MeshBlock and then BoundaryBase
// object. However, not all BoundaryFlag enumerators can be used in all directions as a
// valid MeshBlock boundary.
// a BoundaryFlag enumerator before passing it to a MeshBlock. However, not all
// BoundaryFlag enumerators can be used in all directions as a valid MeshBlock boundary.

void CheckBoundaryFlag(BoundaryFlag block_flag, CoordinateDirection dir) {
std::stringstream msg;
181 changes: 110 additions & 71 deletions src/bvals/bvals.cpp
@@ -14,8 +14,6 @@
// license in this material to reproduce, prepare derivative works, distribute copies to
// the public, perform publicly and display publicly, and to permit others to do so.
//========================================================================================
//! \file bvals.cpp
// \brief constructor/destructor and utility functions for BoundaryValues class

#include "bvals/bvals.hpp"

@@ -45,93 +43,131 @@

namespace parthenon {

// BoundaryValues constructor (the first object constructed inside the MeshBlock()
// constructor): sets functions for the appropriate boundary conditions at each of the 6
// dirs of a MeshBlock
BoundaryValues::BoundaryValues(std::weak_ptr<MeshBlock> wpmb, BoundaryFlag *input_bcs,
ParameterInput *pin)
: BoundaryBase(wpmb.lock()->pmy_mesh, wpmb.lock()->loc, wpmb.lock()->block_size,
input_bcs),
pmy_block_(wpmb) {
// Check BC functions for each of the 6 boundaries in turn ---------------------
for (int i = 0; i < 6; i++) {
switch (block_bcs[i]) {
case BoundaryFlag::reflect:
case BoundaryFlag::outflow:
apply_bndry_fn_[i] = true;
break;
default: // already initialized to false in class
break;
}
}
// Inner x1
nface_ = 2;
nedge_ = 0;
CheckBoundaryFlag(block_bcs[BoundaryFace::inner_x1], CoordinateDirection::X1DIR);
CheckBoundaryFlag(block_bcs[BoundaryFace::outer_x1], CoordinateDirection::X1DIR);

std::shared_ptr<MeshBlock> pmb = GetBlockPointer();
if (!pmb->block_size.symmetry(X2DIR)) {
nface_ = 4;
nedge_ = 4;
CheckBoundaryFlag(block_bcs[BoundaryFace::inner_x2], CoordinateDirection::X2DIR);
CheckBoundaryFlag(block_bcs[BoundaryFace::outer_x2], CoordinateDirection::X2DIR);
}
BoundarySwarm::BoundarySwarm(std::weak_ptr<MeshBlock> pmb, const std::string &label)
: bswarm_index(), pmy_block(pmb), pmy_mesh_(pmb.lock()->pmy_mesh) {
#ifdef MPI_PARALLEL
swarm_comm = pmy_mesh_->GetMPIComm(label);
#endif
InitBoundaryData(bd_var_);
}

if (!pmb->block_size.symmetry(X3DIR)) {
nface_ = 6;
nedge_ = 12;
CheckBoundaryFlag(block_bcs[BoundaryFace::inner_x3], CoordinateDirection::X3DIR);
CheckBoundaryFlag(block_bcs[BoundaryFace::outer_x3], CoordinateDirection::X3DIR);
void BoundarySwarm::InitBoundaryData(BoundaryData<> &bd) {
auto pmb = GetBlockPointer();
BufferID buffer_id(pmb->pmy_mesh->ndim, pmb->pmy_mesh->multilevel);
bd.nbmax = buffer_id.size();

for (int n = 0; n < bd.nbmax; n++) {
bd.flag[n] = BoundaryStatus::waiting;
#ifdef MPI_PARALLEL
bd.req_send[n] = MPI_REQUEST_NULL;
bd.req_recv[n] = MPI_REQUEST_NULL;
#endif
}

// prevent reallocation of contiguous memory space for each of 4x possible calls to
// std::vector<BoundaryVariable *>.push_back() in Field, PassiveScalars
bvars.reserve(3);
}

// destructor

//----------------------------------------------------------------------------------------
//! \fn void BoundaryValues::SetupPersistentMPI()
// \brief Setup persistent MPI requests to be reused throughout the entire simulation
void BoundarySwarm::SetupPersistentMPI() {
#ifdef MPI_PARALLEL
std::shared_ptr<MeshBlock> pmb = GetBlockPointer();

void BoundaryValues::SetupPersistentMPI() {
for (auto bvars_it = bvars.begin(); bvars_it != bvars.end(); ++bvars_it) {
(*bvars_it).second->SetupPersistentMPI();
// Initialize neighbor communications to other ranks
for (int n = 0; n < pmb->neighbors.size(); n++) {
NeighborBlock &nb = pmb->neighbors[n];
// Neighbor on different MPI process
if (nb.snb.rank != Globals::my_rank) {
send_tag[nb.bufid] = pmb->pmy_mesh->tag_map.GetTag(pmb.get(), nb);
recv_tag[nb.bufid] = pmb->pmy_mesh->tag_map.GetTag(pmb.get(), nb);
if (bd_var_.req_send[nb.bufid] != MPI_REQUEST_NULL) {
MPI_Request_free(&bd_var_.req_send[nb.bufid]);
}
if (bd_var_.req_recv[nb.bufid] != MPI_REQUEST_NULL) {
MPI_Request_free(&bd_var_.req_recv[nb.bufid]);
}
}
}
#endif
}
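Note that send_tag and recv_tag above come from the same tag_map.GetTag(...) call, so the map must hand both endpoints of a channel the same tag without them communicating. A toy version of such a map, assuming every rank rebuilds it from the same globally known set of block pairs at remesh (the real TagMap is more involved):

#include <map>
#include <set>
#include <utility>

class TagMap {
  std::map<std::pair<int, int>, int> tags_; // (lo gid, hi gid) -> MPI tag

 public:
  // Every rank calls this with the identical, ordered set of communicating
  // block pairs, so every rank derives identical tags.
  void Build(const std::set<std::pair<int, int>> &pairs) {
    tags_.clear();
    int tag = 0;
    for (const auto &p : pairs) tags_[p] = tag++;
  }

  int GetTag(int gid_a, int gid_b) const {
    auto key = gid_a < gid_b ? std::make_pair(gid_a, gid_b)
                             : std::make_pair(gid_b, gid_a);
    return tags_.at(key);
  }
};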

//----------------------------------------------------------------------------------------
//! \fn void BoundaryValues::StartReceiving(BoundaryCommSubset phase)
// \brief initiate MPI_Irecv()

void BoundaryValues::StartReceiving(BoundaryCommSubset phase) {
for (auto bvars_it = bvars.begin(); bvars_it != bvars.end(); ++bvars_it) {
(*bvars_it).second->StartReceiving(phase);
// Send particle buffers across meshblocks. If different MPI ranks, use MPI, if same rank,
// do a deep copy on device.
void BoundarySwarm::Send(BoundaryCommSubset phase) {
std::shared_ptr<MeshBlock> pmb = GetBlockPointer();
// Fence to make sure buffers are loaded before sending
pmb->exec_space.fence();
for (int n = 0; n < pmb->neighbors.size(); n++) {
NeighborBlock &nb = pmb->neighbors[n];
if (nb.snb.rank != Globals::my_rank) {
#ifdef MPI_PARALLEL
PARTHENON_REQUIRE(bd_var_.req_send[nb.bufid] == MPI_REQUEST_NULL,
"Trying to create a new send before previous send completes!");
PARTHENON_MPI_CHECK(MPI_Isend(bd_var_.send[nb.bufid].data(), send_size[nb.bufid],
MPI_PARTHENON_REAL, nb.snb.rank, send_tag[nb.bufid],
swarm_comm, &(bd_var_.req_send[nb.bufid])));
#endif // MPI_PARALLEL
} else {
MeshBlock &target_block = *pmy_mesh_->FindMeshBlock(nb.snb.gid);
std::shared_ptr<BoundarySwarm> ptarget_bswarm =
target_block.pbswarm->bswarms[bswarm_index];
if (send_size[nb.bufid] > 0) {
// Ensure target buffer is large enough
if (bd_var_.send[nb.bufid].extent(0) >
ptarget_bswarm->bd_var_.recv[nb.targetid].extent(0)) {
ptarget_bswarm->bd_var_.recv[nb.targetid] =
BufArray1D<Real>("Buffer", (bd_var_.send[nb.bufid].extent(0)));
}

target_block.deep_copy(ptarget_bswarm->bd_var_.recv[nb.targetid],
bd_var_.send[nb.bufid]);
ptarget_bswarm->recv_size[nb.targetid] = send_size[nb.bufid];
ptarget_bswarm->bd_var_.flag[nb.targetid] = BoundaryStatus::arrived;
} else {
ptarget_bswarm->recv_size[nb.targetid] = 0;
ptarget_bswarm->bd_var_.flag[nb.targetid] = BoundaryStatus::completed;
}
}
}
}

//----------------------------------------------------------------------------------------
//! \fn void BoundaryValues::ClearBoundary(BoundaryCommSubset phase)
// \brief clean up the boundary flags after each loop

void BoundaryValues::ClearBoundary(BoundaryCommSubset phase) {
// Note BoundaryCommSubset::mesh_init corresponds to initial exchange of conserved fluid
// variables and magentic fields
for (auto bvars_it = bvars.begin(); bvars_it != bvars.end(); ++bvars_it) {
(*bvars_it).second->ClearBoundary(phase);
void BoundarySwarm::Receive(BoundaryCommSubset phase) {
#ifdef MPI_PARALLEL
std::shared_ptr<MeshBlock> pmb = GetBlockPointer();
const int &mylevel = pmb->loc.level();
for (int n = 0; n < pmb->neighbors.size(); n++) {
NeighborBlock &nb = pmb->neighbors[n];
if (nb.snb.rank != Globals::my_rank) {
// Check to see if we got a message
int test;
MPI_Status status;

if (bd_var_.flag[nb.bufid] != BoundaryStatus::completed) {
PARTHENON_MPI_CHECK(
MPI_Iprobe(nb.snb.rank, recv_tag[nb.bufid], swarm_comm, &test, &status));
if (!static_cast<bool>(test)) {
bd_var_.flag[nb.bufid] = BoundaryStatus::waiting;
} else {
bd_var_.flag[nb.bufid] = BoundaryStatus::arrived;

// If message is available, receive it
PARTHENON_MPI_CHECK(
MPI_Get_count(&status, MPI_PARTHENON_REAL, &(recv_size[nb.bufid])));
if (recv_size[nb.bufid] > bd_var_.recv[nb.bufid].extent(0)) {
bd_var_.recv[nb.bufid] = BufArray1D<Real>("Buffer", recv_size[nb.bufid]);
}
PARTHENON_MPI_CHECK(MPI_Recv(bd_var_.recv[nb.bufid].data(), recv_size[nb.bufid],
MPI_PARTHENON_REAL, nb.snb.rank,
recv_tag[nb.bufid], swarm_comm, &status));
}
}
}
}
#endif
}
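The receive path uses a probe-then-receive idiom so buffers can be grown to fit the incoming message before it is pulled off the network. Stripped of the swarm machinery, the core pattern is (sketch; error handling elided):

#ifdef MPI_PARALLEL
#include <mpi.h>
#include <vector>

// Returns true and fills buf if a message from src with the given tag has
// arrived; returns false without blocking otherwise.
bool TryReceive(int src, int tag, MPI_Comm comm, std::vector<double> &buf) {
  int pending = 0;
  MPI_Status status;
  MPI_Iprobe(src, tag, comm, &pending, &status); // non-blocking check
  if (!pending) return false;
  int count = 0;
  MPI_Get_count(&status, MPI_DOUBLE, &count);    // size the buffer first
  if (count > static_cast<int>(buf.size())) buf.resize(count);
  MPI_Recv(buf.data(), count, MPI_DOUBLE, src, tag, comm, &status);
  return true;
}
#endif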

// BoundarySwarms constructor (the first object constructed inside the MeshBlock()
// constructor): sets functions for the appropriate boundary conditions at each of the 6
// dirs of a MeshBlock
BoundarySwarms::BoundarySwarms(std::weak_ptr<MeshBlock> wpmb, BoundaryFlag *input_bcs,
ParameterInput *pin)
: BoundaryBase(wpmb.lock()->pmy_mesh, wpmb.lock()->loc, wpmb.lock()->block_size,
input_bcs),
pmy_block_(wpmb) {
: pmy_block_(wpmb) {
// Check BC functions for each of the 6 boundaries in turn ---------------------
// TODO(BRR) Add physical particle boundary conditions, maybe using the below code
/*for (int i = 0; i < 6; i++) {
@@ -144,6 +180,9 @@ BoundarySwarms::BoundarySwarms(std::weak_ptr<MeshBlock> wpmb, BoundaryFlag *input_bcs,
break;
}
}*/
for (int i = 0; i < 6; ++i)
block_bcs[i] = input_bcs[i];

// Inner x1
nface_ = 2;
nedge_ = 0;
@@ -167,7 +206,7 @@ BoundarySwarms::BoundarySwarms(std::weak_ptr<MeshBlock> wpmb, BoundaryFlag *input_bcs,
}

//----------------------------------------------------------------------------------------
//! \fn void BoundaryValues::SetupPersistentMPI()
//! \fn void BoundarySwarms::SetupPersistentMPI()
// \brief Setup persistent MPI requests to be reused throughout the entire simulation

void BoundarySwarms::SetupPersistentMPI() {