Modernize MPI interface #363
Comments
As an aside, I'm starting to work on coupling CICE6 with NEMO for our systems and the …
That would be good. Have you verified that it will fix your problem in NEMO?
Yes I did, it does. However, I have another problem; should I open a new issue?
I just merged #365. If there are other issues, feel free to add them to this one or open a new issue, whatever you prefer.
I'd vote for reopening, since the code modernization issue was not addressed. Maybe we can talk about it at the next teleconference?
In #389 we switched to the … Basically, in …
I did a bit of research about mpi_f08 the other day, and support depends on the MPI library we're building with, per compiler and per machine. Before we move forward with mpi_f08, we should have confidence that the installed MPI versions on just about any machine/setup the community might use will support mpi_f08. This is the kind of situation where we want to support the greatest common implementation.
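For context, here is a minimal sketch (not taken from the CICE sources; the program and variable names are made up) of what code written against the `mpi_f08` bindings under discussion looks like. The typed handles such as `type(MPI_Comm)` are what provide the stronger compile-time checking, but they also require an MPI installation whose Fortran 2008 module was built for the compiler in use, which is the portability concern above:

```fortran
! Minimal sketch (not from CICE) of code written against the mpi_f08 module.
! MPI handles are derived types rather than plain integers, so argument
! mismatches are caught at compile time.
program hello_mpi_f08
   use mpi_f08
   implicit none
   type(MPI_Comm) :: comm        ! would be a plain integer under "use mpi"
   integer :: rank, nprocs, ierr

   call MPI_Init(ierr)
   comm = MPI_COMM_WORLD
   call MPI_Comm_rank(comm, rank, ierr)
   call MPI_Comm_size(comm, nprocs, ierr)
   print '(a,i0,a,i0)', 'rank ', rank, ' of ', nprocs
   call MPI_Finalize(ierr)
end program hello_mpi_f08
```

Built with something like `mpifort hello_mpi_f08.F90`, this only compiles if the MPI library ships an `mpi_f08` module for that compiler, which is exactly the availability question raised above.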
Is there more to discuss on this issue, or is it ready to be closed? @phil-blain @apcraig
My opinion is that in a few years (maybe more), we'll eventually want to switch to the mpi_f08 module.
I think my preference would be to close this for now. When we are ready to update to mpi_f08, we can open another issue. I think if we have open issues for things we might do in several years, it's going to be difficult to sort out active issues from inactive issues. Maybe we can create a new label called "Type: Inactive" and close this. That label would allow us to search for these kinds of things in all issues later. Just an idea.
I like that idea. There are several other issues that might get that label.
I have added an Inactive label to both CICE and Icepack that we can use. I have attached it to this issue, but have not yet closed it. If @phil-blain agrees that using this label and closing these issues for the time being is reasonable, I think we could probably take the same approach with a few other issues as well.
I think that's a good idea.
Just to add a quick note: we have added a section to the Software Development Practices wiki page to summarize a policy regarding this issue: https://github.com/CICE-Consortium/About-Us/wiki/Software-Development-Practices.
I just noticed that the CICE code uses two (very) old ways to get MPI functionality, that is, either

- a preprocessor `#include` macro (in `cicecore/cicedynB/infrastructure/comm/mpi/ice_reprosum.F90` and `cicecore/cicedynB/infrastructure/comm/serial/ice_reprosum.F90`), or
- a Fortran `include` statement (everywhere else).

The `mpif.h` header is strongly discouraged by the MPI standard and will be deprecated in the future; the more modern way is to use a Fortran `use` statement (see the sketch below).

Reference: https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node408.htm
I think making this change would make CICE more future-proof.
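As an illustration, here is a minimal sketch (not taken from the CICE sources; the program and variable names are made up) contrasting the two header-based forms with the module-based interface proposed above:

```fortran
! Minimal sketch (not from CICE) of the module-based interface, replacing
! either of the two header-based forms:
!
!   #include <mpif.h>     (preprocessor macro, as in ice_reprosum.F90)
!   include 'mpif.h'      (Fortran include statement, used everywhere else)
!
program mpi_use_example
   use mpi                 ! same constants and routines, plus explicit interfaces
   implicit none
   integer :: ierr, rank

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Finalize(ierr)
end program mpi_use_example
```

Switching from `include 'mpif.h'` to `use mpi` is typically a one-line change per file, since routine names and named constants are unchanged; the later move to `mpi_f08` is more invasive because handle arguments become derived types.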