implement MPI variants #90
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…).
Thanks for doing this @minrk. Starting with … So this may be premature, as we are still figuring this out, but it would be good to have something in the conda-forge docs about this. In the interim, maybe it could live in Dropbox Paper, HackMD, or somewhere else while we figure out what works and what doesn't. Just somewhere we can refine our understanding and use to formulate the docs on how to use this strategy. Thoughts? Preferences?
workaround for `Internal Error: get_unit(): Bad internal unit KIND`
@jakirkham sounds good. I'll start a doc sketching out what we know so far. FWIW, mpi-requiring packages (mumps-mpi, scalapack, petsc, etc.) are already building with mpi variants and it's working nicely. This is a slight variation because it's the first package that has a 'no mpi' variant to prefer, which is the reason for the track_features trick.
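As a rough sketch (assuming the variant key is called `mpi`, which may not match this recipe exactly), the variant axis in `conda_build_config.yaml` could look like:

```yaml
# conda_build_config.yaml -- illustrative variant axis; each entry
# triggers a separate build of the recipe
mpi:
  - nompi
  - mpich
  - openmpi
```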
for the same reason some serial tests are skipped
used in `make check RUNPARALLEL=…`; sets environment variables and mpich/openmpi parameters taken from the petsc and other recipes
the test is meant to crash (that's what it tests), but openmpi sets an exit code when this happens that the test doesn't deal with; see http://hdf-forum.184993.n3.nabble.com/HDF5-1-8-14-15-16-with-OpenMPI-1-10-1-and-Intel-16-1-td4028533.html
Linux builds succeed. I think mac will as well. The only caveat I've hit is that the mpi fortran builds seem to fail with `Internal Error: get_unit(): Bad internal unit KIND`.

mpi builds currently have fortran support disabled because of this. The same error is showing up in the conda-build 3 PRs here, so I suspect it's the same issue, possibly using the same wrong fortran compiler? I'm not sure. Googling suggests that installing the gcc package (even on Linux) would fix this, but I haven't tried.
Having tracked down the …
so that packages can depend on hdf5 with mpi or not. Testing h5py built against serial hdf5, it works with parallel hdf5, but not the other way around, so the mpi builds have run_exports pinning the right build string, while the serial builds do not have run_exports
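A hedged sketch of what that asymmetric `run_exports` could look like in `meta.yaml` (the pin expression and build-string pattern are illustrative assumptions, not necessarily what this recipe uses):

```yaml
build:
  # mpi builds export a pin that forces downstream packages onto a build
  # with the matching mpi build string; nompi builds export nothing,
  # since packages built against serial hdf5 also run against parallel hdf5
  run_exports:
    - {{ pin_subpackage('hdf5', max_pin='x.x.x') }} {{ mpi }}_*   # [mpi != 'nompi']
```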
breaks fortran compiler detection
shared-memory seems to have issues in ompi, at least on mac
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR (…). Here's what I've got...

For recipe: …
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…).
recipe-lint fails if mpi is undefined, and apparently it runs without defining mpi (which means recipe-lint ignores conda_build_config.yaml)
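One possible guard (a sketch, assuming the variant variable is named `mpi`) is to give the jinja variable a default at the top of `meta.yaml`, so the recipe still renders when `conda_build_config.yaml` is not consulted:

```yaml
# top of meta.yaml -- fall back to the serial variant when `mpi` is not
# supplied by conda_build_config.yaml (e.g. a linter rendering the recipe
# standalone); jinja2's undefined value is falsy, so `or` picks the default
{% set mpi = mpi or 'nompi' %}
```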
seems to hang. Not sure why
I've now got all combinations of {gcc, toolchain, clang} × {mpich, openmpi, nompi} building here. Writeup of mpi variants here. This recipe: …
I chose that strategy for run_exports specifically because I tested h5py built against serial hdf5 run against parallel hdf5 and it worked. h5py built against parallel hdf5 did not run against serial hdf5 or other mpi.
Based on this comment by @mcg1969, hdf5 has 3 variants:

- the default variant, built without mpi
- `hdf5_mpich`, built with `--enable-parallel` and mpich
- `hdf5_openmpi`, built with `--enable-parallel` and openmpi

The use of track_features on the mpi variants results in `conda install hdf5` preferring the non-mpi variants unless explicitly requested. Downstream packages can depend directly on the feature-having variants.

Alternatives include:

- a separate `hdf5-parallel` package

See #51 for more details of the pros and cons of the separate-package option.
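To make the mechanism above concrete, here is a hedged `meta.yaml` sketch of the track_features / build-string layout (feature names, build-string format, and selector usage are illustrative assumptions, not necessarily the final recipe):

```yaml
build:
  number: 0
  # encode the variant in the build string so all three builds can coexist
  string: nompi_h{{ PKG_HASH }}_0        # [mpi == 'nompi']
  string: "{{ mpi }}_h{{ PKG_HASH }}_0"  # [mpi != 'nompi']
  # track_features "weighs down" the mpi builds, so a bare
  # `conda install hdf5` resolves to the serial build unless an mpi
  # variant is requested explicitly
  track_features:
    - hdf5_{{ mpi }}                     # [mpi != 'nompi']

requirements:
  build:
    - {{ mpi }}                          # [mpi != 'nompi']
  run:
    - {{ mpi }}                          # [mpi != 'nompi']
```

A downstream recipe that needs parallel hdf5 can then require a matching build-string pattern, e.g. something like `hdf5=*=mpich_*` (the exact pattern depends on the final build-string convention), while serial consumers just depend on `hdf5`.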
closes #51