There are some places in the codebase that still use blocking MPI calls. These have been used for simplicity and/or performance, but they come with risks: the underlying pika worker thread is blocked for the duration of the call, which can lead to deadlocks. This issue lists the remaining calls (in miniapps and the main library, not the tests) so that we are aware of what's left and whether anything needs to be prioritized for conversion to asynchronous communication.
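For context, here is a minimal sketch of the difference between the two patterns, using plain MPI rather than DLA-Future's actual communication wrappers. The `yield()` comment is a placeholder for handing the thread back to the pika scheduler; it is not the mechanism DLA-Future uses, just an illustration of why the nonblocking form avoids pinning a worker thread:

```cpp
#include <mpi.h>

// Blocking variant: the calling (pika worker) thread is stuck inside MPI
// until every rank participates, so no other task can run on this thread.
// If another rank's matching call depends on a task scheduled on this
// thread, the result is a deadlock.
void all_reduce_blocking(double* buf, int count, MPI_Comm comm) {
  MPI_Allreduce(MPI_IN_PLACE, buf, count, MPI_DOUBLE, MPI_SUM, comm);
}

// Nonblocking variant: start the reduction, then poll for completion.
// Between polls the thread can be handed back to the runtime so other
// tasks keep making progress.
void all_reduce_nonblocking(double* buf, int count, MPI_Comm comm) {
  MPI_Request req;
  MPI_Iallreduce(MPI_IN_PLACE, buf, count, MPI_DOUBLE, MPI_SUM, comm, &req);
  int done = 0;
  while (!done) {
    MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    // yield();  // placeholder: give the worker thread back to the scheduler
  }
}
```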
- `max_norm`: call to `reduce` (`DLA-Future/include/dlaf/auxiliary/norm/mc.h`, lines 91 to 92 in `0be8cc9`)
- `reduce` and `broadcast` calls in the cholesky miniapp
- `allReduceInPlace` in the reduction to band algorithm (`DLA-Future/include/dlaf/eigensolver/reduction_to_band/impl.h`, lines 608 to 610 and line 680 in `0be8cc9`)
- `MPI_Barrier` calls in miniapps (many, but not all, removed in "Add miniapps as tests" #1112)

Please edit or comment if you find calls that I've missed.