Add MPI communicator option to the MUMPS solver interface #790
This PR adds an MPI communicator option to the MUMPS solver interface. The motivation came from wanting to use MUMPS with `MPI_COMM_SELF`.

The project I'm working on, internal to LLNL, runs in parallel but runs MUMPS serially. To achieve this, we configure MUMPS with MPI but set `--disable-mpiinit`, as described here under "MUMPS Linear Solver".

We use Spack to install our third-party libraries, including Ipopt and MUMPS, so we are not using the recommended ThirdParty-Mumps. Figured I would share for awareness (our package.py is slightly different): https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ipopt/package.py
Before the changes in this PR, our tests were hanging when run with multiple MPI tasks.
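For context, here is a minimal sketch of what "serial MUMPS inside a parallel program" looks like at the plain MUMPS C API level (illustrative only, not the code in this PR): the key is converting `MPI_COMM_SELF` with `MPI_Comm_c2f` and passing it through the `comm_fortran` field, instead of letting MUMPS default to `MPI_COMM_WORLD`.

```c
/* Minimal sketch (not the actual Ipopt change): initializing MUMPS on
 * MPI_COMM_SELF so each rank runs its own serial MUMPS instance.
 * Assumes MUMPS built with MPI support and the standard dmumps_c API. */
#include <mpi.h>
#include <dmumps_c.h>

#define JOB_INIT -1
#define JOB_END  -2

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    DMUMPS_STRUC_C id;
    id.job = JOB_INIT;
    id.par = 1;   /* host rank participates in the factorization */
    id.sym = 2;   /* general symmetric matrix */
    /* Pass MPI_COMM_SELF instead of the default MPI_COMM_WORLD;
     * MUMPS expects a Fortran communicator handle. */
    id.comm_fortran = (MUMPS_INT) MPI_Comm_c2f(MPI_COMM_SELF);
    dmumps_c(&id);   /* initialize a per-rank serial instance */

    /* ... analyse / factorize / solve with id.job = 1, 2, 3 ... */

    id.job = JOB_END;
    dmumps_c(&id);
    MPI_Finalize();
    return 0;
}
```

With `MPI_COMM_SELF`, each rank's MUMPS instance is independent and never blocks in collectives involving other ranks, which is the behavior the new communicator option makes reachable from the Ipopt side.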