diff --git a/docs/source/bmi.control_funcs.rst b/docs/source/bmi.control_funcs.rst
index 3715ad4..c728f79 100644
--- a/docs/source/bmi.control_funcs.rst
+++ b/docs/source/bmi.control_funcs.rst
@@ -18,26 +18,30 @@ updating.
 
 .. code-block:: java
 
   /* SIDL */
-  int parallel_initialize(in integer mpi_communicator);
+  int parallel_initialize(in integer comm);
 
 The `parallel_initialize` function initializes the model for running
 in a parallel environment.
-It initializes the MPI communicator that the model should use to
-communicate between all of its threads.
+It sets the MPI communicator that the model should use to
+communicate between all of its ranks.
 The `parallel_initialize` function must be called before the
 `initialize` function.
-This communicator could be ``mpi_comm_world``,
+This communicator could be ``MPI_COMM_WORLD``,
 but it is typically a derived communicator across a subset of the
-MPI threads available for the whole simulation.
+MPI ranks available for the whole simulation.
 
 **Implementation notes**
 
 * This function is only needed for MPI aware models.
-* Models should be refactored, if necessary, to accept the mpi_communicator
+* Models should be refactored, if necessary, to accept the MPI communicator
   via the model API.
-* The MPI communicator is not in all environments represented by an integer.
-  **TODO**: check with experts.
+* The MPI communicator in the Fortran ``mpi_f08`` module is of type
+  ``MPI_Comm``. The integer value of a variable ``foo`` of type ``MPI_Comm``
+  can be accessed as ``foo%MPI_VAL``. This may be needed when interacting
+  with non-Fortran models and with Fortran models that use the ``mpi`` module.
+
+
 [:ref:`control_funcs` | :ref:`basic_model_interface`]
 
@@ -73,9 +77,9 @@ formatted.
   a string -- a basic type in these languages.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned. In C++, Java, and Python, an exception is raised on failure.
-* *Parallel*: When a model runs across multiple MPI threads, the `parallel_initialize`
+* *Parallel*: When a model runs across multiple MPI ranks, `parallel_initialize`
   should be called first to make sure that the model can communicate with
-  the other MPI threads on which it runs.
+  the other MPI ranks on which it runs.
 
 [:ref:`control_funcs` | :ref:`basic_model_interface`]
 
diff --git a/docs/source/bmi.getter_setter.rst b/docs/source/bmi.getter_setter.rst
index 9f79603..bcd5f32 100644
--- a/docs/source/bmi.getter_setter.rst
+++ b/docs/source/bmi.getter_setter.rst
@@ -48,8 +48,8 @@ even if the model uses dimensional variables.
   variable may not be accessible after calling :ref:`finalize`.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: the number of items may vary per MPI thread,
-  hence the size and content of the *dest* argument will vary per MPI thread.
+* *Parallel*: the number of items may vary per MPI rank,
+  hence the size and content of the *dest* argument will vary per MPI rank.
 
 [:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]
 
@@ -78,8 +78,8 @@ even if the model's state has changed.
 * In Python, a :term:`numpy` array is returned.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: the reference returned will vary per MPI thread.
-  It refers only to the data for the thread considered.
+* *Parallel*: the reference returned will vary per MPI rank.
+  It refers only to the data for the rank considered.
 
 [:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]
 
@@ -106,9 +106,9 @@ Additionally,
 
 * Both *dest* and *inds* are flattened arrays.
 * The *inds* argument is always of type integer.
-* *Parallel*: the indices are the *local* indices within the MPI thread.
-  The number of indices for which data is retrieved may vary per MPI thread.
-  The length and content of the *dest* argument will vary per MPI thread.
+* *Parallel*: the indices are the *local* indices within the MPI rank.
+  The number of indices for which data is retrieved may vary per MPI rank.
+  The length and content of the *dest* argument will vary per MPI rank.
 
 [:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]
 
@@ -144,8 +144,8 @@ even if the model uses dimensional variables.
   variable may not be accessible after calling :ref:`finalize`.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: the number of items may vary per MPI thread,
-  hence the size and content of the *src* argument will vary per MPI thread.
+* *Parallel*: the number of items may vary per MPI rank,
+  hence the size and content of the *src* argument will vary per MPI rank.
 
 [:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]
 
@@ -171,8 +171,8 @@ Additionally,
 
 * Both *src* and *inds* are flattened arrays.
 * The *inds* argument is always of type integer.
-* *Parallel*: the indices are the *local* indices within the MPI thread.
-  The number of indices for which data is set may vary per MPI thread.
-  The length and content of the *src* argument will vary per MPI thread.
+* *Parallel*: the indices are the *local* indices within the MPI rank.
+  The number of indices for which data is set may vary per MPI rank.
+  The length and content of the *src* argument will vary per MPI rank.
 
 [:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]
diff --git a/docs/source/bmi.grid_funcs.rst b/docs/source/bmi.grid_funcs.rst
index 0410f21..e89d03d 100644
--- a/docs/source/bmi.grid_funcs.rst
+++ b/docs/source/bmi.grid_funcs.rst
@@ -113,7 +113,7 @@ for :ref:`unstructured ` and
   size is returned from the function.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: this function returns the *total number* of elements across all threads.
+* *Parallel*: this function returns the *total number* of elements across all ranks.
   For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -320,7 +320,7 @@ See :ref:`model_grids` for more information.
   (nonzero) is returned.
 * *Parallel*: the coordinates returned only concern the index range returned
   by :ref:`get_grid_partition_range`.
-  The length and content of the *x* argument will vary per MPI thread.
+  The length and content of the *x* argument will vary per MPI rank.
   Where partitions overlap, they MUST return the same coordinate values.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -354,7 +354,7 @@ The length of the resulting one-dimensional array depends on the grid type.
   (nonzero) is returned.
 * *Parallel*: the coordinates returned only concern the index range returned
   by :ref:`get_grid_partition_range`.
-  The length and content of the *y* argument will vary per MPI thread.
+  The length and content of the *y* argument will vary per MPI rank.
   Where partitions overlap, they MUST return the same coordinate values.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -388,7 +388,7 @@ The length of the resulting one-dimensional array depends on the grid type.
   (nonzero) is returned.
 * *Parallel*: the coordinates returned only concern the index range returned
   by :ref:`get_grid_partition_range`.
-  The length and content of the *z* argument will vary per MPI thread.
+  The length and content of the *z* argument will vary per MPI rank.
   Where partitions overlap, they MUST return the same coordinate values.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -598,7 +598,7 @@ Get the total number of :term:`nodes ` in the grid.
   count is returned from the function.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: this function returns the *total number* of nodes across all threads.
+* *Parallel*: this function returns the *total number* of nodes across all ranks.
   For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -649,7 +649,7 @@ Get the total number of :term:`edges ` in the grid.
   count is returned from the function.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: this function returns the *total number* of edges across all threads.
+* *Parallel*: this function returns the *total number* of edges across all ranks.
   For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -700,7 +700,7 @@ Get the total number of :term:`faces ` in the grid.
   count is returned from the function.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: this function returns the *total number* of faces across all threads.
+* *Parallel*: this function returns the *total number* of faces across all ranks.
   For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -756,8 +756,8 @@ node at edge head. The total length of the array is
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
 * *Parallel*: this function returns the connectivity for the edges
-  and nodes on the current thread, hence the length and content of
-  *edge_nodes* varies per MPI thread.
+  and nodes on the current rank, hence the length and content of
+  *edge_nodes* vary per MPI rank.
   The total length of the array is
   2 * :ref:`get_grid_partition_edge_count`.
 
@@ -788,8 +788,8 @@ The length of the array returned is the sum of the values of
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
 * *Parallel*: this function returns the connectivity for the faces
-  and edges on the current thread, hence the length and content of
-  *face_edges* varies per MPI thread.
+  and edges on the current rank, hence the length and content of
+  *face_edges* vary per MPI rank.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -824,8 +824,8 @@ the length of the array is the sum of the values of
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
 * *Parallel*: this function returns the connectivity for the faces
-  and nodes on the current thread, hence the length and content of
-  *face_nodes* varies per MPI thread.
+  and nodes on the current rank, hence the length and content of
+  *face_nodes* vary per MPI rank.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
 
@@ -855,7 +855,7 @@ The number of edges per face is equal to the number of nodes per face.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
 * *Parallel*: this function returns the number of nodes per face on the
-  current thread, hence the length and content of
-  *nodes_per_face* varies per MPI thread.
+  current rank, hence the length and content of
+  *nodes_per_face* vary per MPI rank.
 
 [:ref:`grid_funcs` | :ref:`basic_model_interface`]
diff --git a/docs/source/bmi.spec.rst b/docs/source/bmi.spec.rst
index 6a71863..75b3511 100644
--- a/docs/source/bmi.spec.rst
+++ b/docs/source/bmi.spec.rst
@@ -22,9 +22,9 @@ grouped by functional category.
 
 **Implementation notes**
 
-* *Parallel*: All functions MUST be called on all MPI threads.
-  When a function returns a status code, the value returned SHOULD be the same across all MPI threads.
-  All other return arguments MUST be the same across all MPI threads unless explicitly stated otherwise.
+* *Parallel*: All functions MUST be called on all MPI ranks.
+  When a function returns a status code, the value returned SHOULD be the same across all MPI ranks.
+  All other return arguments MUST be the same across all MPI ranks unless explicitly stated otherwise.
 
 .. table:: **Table 3:** Summary of BMI functions.
   :align: center
diff --git a/docs/source/bmi.var_funcs.rst b/docs/source/bmi.var_funcs.rst
index 731bd3f..a760429 100644
--- a/docs/source/bmi.var_funcs.rst
+++ b/docs/source/bmi.var_funcs.rst
@@ -150,8 +150,8 @@ a variable; i.e., the number of items multiplied by the size of each item.
   amount of memory used by the variable is returned from the function.
 * In C and Fortran, an integer status code indicating success (zero) or failure
   (nonzero) is returned.
-* *Parallel*: the number of items may vary per MPI thread,
-  hence the value returned will typically vary per MPI thread.
+* *Parallel*: the number of items may vary per MPI rank,
+  hence the value returned will typically vary per MPI rank.
 
 [:ref:`var_funcs` | :ref:`basic_model_interface`]
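
As an aside to the ``%MPI_VAL`` note in ``bmi.control_funcs.rst`` above: the sketch below shows one way a C implementation of a model could consume the integer handle passed to `parallel_initialize`, assuming that handle is a Fortran communicator handle (which is what the ``%MPI_VAL`` remark implies). The ``my_model`` struct and its fields are hypothetical and purely illustrative; they are not part of the BMI specification.

.. code-block:: c

  #include <mpi.h>

  /* Hypothetical per-instance model state. */
  struct my_model {
      MPI_Comm comm;  /* communicator used internally by the model */
      int rank;       /* this process's rank within that communicator */
      int size;       /* number of ranks in that communicator */
  };

  /* Accept the integer (Fortran) communicator handle and convert it
     to a C MPI_Comm before caching it in the model state. */
  int parallel_initialize(struct my_model *self, int comm)
  {
      self->comm = MPI_Comm_f2c((MPI_Fint) comm);

      if (MPI_Comm_rank(self->comm, &self->rank) != MPI_SUCCESS)
          return 1;  /* nonzero status code: failure */
      if (MPI_Comm_size(self->comm, &self->size) != MPI_SUCCESS)
          return 1;

      return 0;  /* zero status code: success */
  }

On the calling side, a framework using ``mpi_f08`` would pass ``comm%MPI_VAL`` (or the result of ``MPI_Comm_c2f``), while a caller using the older ``mpi`` module already holds the communicator as an integer.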