
# Getting Started guide  {#gettingstartedmpc}

Introduction
------------

This Getting Started guide details the various ways to install and configure the
MPC (Multi-Processor Computing) library, which provides MPI `@MPI_VERSION@`,
OpenMP `@OMP_VERSION@` (through a patched GCC) and PThread support. For further
information on MPC internals, refer to the
[documentation](http://mpc.paratools.com/Documentation) describing the MPC
framework and its execution model.

Prerequisites
-------------

The following prerequisites are required to compile MPC:

  * The main archive file from the [Downloads page](http://mpc.paratools.com/Download)
  * A **C** compiler (e.g., `gcc`)
  * An optional **Fortran** compiler if Fortran applications are to be used
    (e.g., `g77` or `gfortran`)
  * Optionally, the *SLURM*, *HWLOC* and *OPENPA* libraries
  * For *InfiniBand* support, the *libibverbs* library is required

Standard Installation
---------------------

1. Unpack the main archive (tar format) and go to the top-level directory:

        $ tar xfz MPC_@MPC_VERSION@.tar.gz
        $ cd MPC_@MPC_VERSION@

2. If your tar program does not accept the 'z' option, use the following
   commands:

        $ gunzip MPC_@MPC_VERSION@.tar.gz
        $ tar xf MPC_@MPC_VERSION@.tar
        $ cd MPC_@MPC_VERSION@

3. Choose an installation directory. Using the default `/usr/local/` would
   overwrite existing system headers. The most convenient choice is a directory
   shared among all the machines on which you want to use the library. If such a
   directory is not accessible, you will have to deploy MPC on every machine
   after the installation.

        $ mkdir $HOME/mpc-install


4. Launch the installation script for MPC and its dependencies, specifying the
   installation directory:

        $ ./installmpc --prefix=$HOME/mpc-install

  **Note**: you can use the flag `-jX` to perform a faster installation. `X`
  should be a number and is the total number of parallel jobs launched by *Make*.
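
  For example, to build with eight parallel *Make* jobs using the prefix chosen
  above:

        $ ./installmpc --prefix=$HOME/mpc-install -j8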


5. Source the *mpcvars* script (located at the root of the MPC installation
   directory) to update environment variables (e.g., *PATH* and
   *LD\_LIBRARY\_PATH*).

For *csh* and *tcsh*:

        $ source $HOME/mpc-install/mpcvars.csh

For *bash* and *sh*:

        $ source $HOME/mpc-install/mpcvars.sh

Check that everything went well at this point by running:

        $ which mpc_cc
        $ which mpcrun

If there were no errors, these commands should display the paths of the mpc\_cc
and mpcrun scripts.
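
For example, with the prefix chosen above, the output would typically point into
the installation's `bin` directory (the exact path shown here is illustrative
only and depends on your prefix):

        $ which mpc_cc
        /home/user/mpc-install/bin/mpc_cc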

6. To compile your first MPC program, you may execute the mpc\_cc compiler:

        $ mpc_cc hello_world.c -o hello_world

 This command uses the default patched *GCC* to compile the code. If you
 want to use your favorite compiler instead (losing some features such as OpenMP
 support and global-variable removal), you may use the *mpc\_cflags* and
 *mpc\_ldflags* commands:

        $ ${CC} hello_world.c -o hello_world `mpc_cflags` `mpc_ldflags`
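
 If you do not have a test program at hand, the following minimal MPI hello
 world can serve as `hello_world.c` (a plain sketch using the standard MPI C
 API, nothing MPC-specific):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;

            /* Initialize the MPI runtime and query rank/size */
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            printf("Hello world from rank %d out of %d\n", rank, size);

            MPI_Finalize();
            return 0;
        }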

7. To execute your MPC program, use the mpcrun command:

        $ mpcrun -m=ethread      -p=1 -n=4 hello_world
        $ mpcrun -m=ethread_mxn  -p=1 -n=4 hello_world
        $ mpcrun -m=pthread      -p=1 -n=4 hello_world

Standard MPI Compilers
-----------------------

MPC provides standard MPI compiler wrappers such as `mpicc`, `mpicxx` and
`mpif77`. These compilers invoke MPC but do not privatize by default
(process mode). If you want to port an existing build system relying on `mpicc`,
you may `export MPI_PRIV=1`, which turns on privatization in these wrappers.

For example:

        $ MPI_PRIV=1 mpicc hello.c -o hello
        # Strictly equivalent to
        $ mpc_cc hello.c -o hello

Custom Installation
-------------------

The previous section described the default installation and configuration of
MPC, but other alternatives are available. You can find more details on the
configuration options by running `./configure --help`.

Specify the PMI based launcher to be used by MPC as follows:

* `--with-hydra`: Compile MPC with the **Hydra** launcher (embedded in the MPC
  distribution). This option is enabled by default.
* `--with-slurm[=prefix]`: Compile MPC with the **SLURM** launcher.

Specify external libraries used by MPC:

* `--with-hwloc=prefix`: Specify the prefix of the hwloc library
* `--with-openpa=prefix`: Specify the prefix of the OpenPA library
* `--with-mpc-binutils=prefix`: Specify the prefix of the binutils

Specify external libraries used for GCC compilation:

* `--with-gmp=prefix`: Specify the prefix of the GMP library
* `--with-mpfr=prefix`: Specify the prefix of the MPFR library
* `--with-mpc=prefix`: Specify the prefix of the MPC multiprecision library

Specify an external GCC and GDB:

* `--with-mpc-gcc=prefix`: Specify the prefix of the GCC used by MPC
* `--with-mpc-gdb=prefix`: Specify the prefix of the GDB used by MPC
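
As an illustration only, a custom configuration selecting the SLURM launcher and
an external hwloc might look like the following (the paths are placeholders, and
whether you pass these flags to `./configure` directly or through the
`installmpc` script depends on your workflow):

        $ ./configure --prefix=$HOME/mpc-install \
                      --with-slurm \
                      --with-hwloc=/usr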

Running MPC
-----------

The mpcrun script drives the launch of MPC programs with different types of
parallelism. Its usage is defined as follows:

Usage: mpcrun [option] [--] binary [user args]

Launch Configuration:
	-N=n,--node-nb=n                Total number of nodes
	-p=n, --process-nb=n            Total number of UNIX processes
	-n=n, --task-nb=n               Total number of tasks
	-c=n, --cpu-nb=n                Number of cpus per UNIX process
	--cpu-nb-mpc=n                  Number of cpus per process for the MPC process
	--enable-smt                    Enable SMT capabilities (disabled by default)

Multithreading:
	-m=n, --multithreading=n        Define multithreading mode
                                        modes: pthread ethread_mxn

Network:
	-net=n, --net=, --network=n     Define Network mode (TCP + SHM by default)

	Configured CLI switches for network configurations:

		- ib:
			* Rail 0: shm_mpi
			* Rail 1: ib_mpi
		- opa:
			* Rail 0: shm_mpi
			* Rail 1: opa_mpi
		- tcponly:
			* Rail 0: tcp_mpi
		- tcp:
			* Rail 0: shm_mpi
			* Rail 1: tcp_mpi
		- multirail_tcp:
			* Rail 0: shm_mpi
			* Rail 1: tcp_large
		- portals:
			* Rail 0: portals_mpi
		- shm:
			* Rail 0: shm_mpi
		- default:
			* Rail 0: shm_mpi
			* Rail 1: ib_mpi

Launcher:
	-l=n, --launcher=n              Define launcher
	--opt=<options>                 Launcher specific options
	--launchlist                    Print available launch methods
	--config=<file>                 Configuration file to load.
	--profiles=<p1,p2>              List of profiles to enable in config.


Information:
	-h, --help                      Display this help
	--show                          Display command line

	-v,-vv,-vvv,--verbose=n         Verbose mode (level 1 to 3)
	--verbose                       Verbose level 1
	--graphic-placement             Output an XML file to visualize thread
	                                placement and topology for each compute node
	--text-placement                Output a text file to visualize thread placement and
	                                their information on the topology for each compute node

Debugger:
	--dbg=<debugger_name> to use a debugger

Launcher options
----------------

Options passed to the launcher should be compatible with the launch mode chosen
during *configure*. For more information, refer to the documentation of mpiexec
(for Hydra) and srun (for SLURM).

* `Hydra`: If MPC is configured with Hydra, mpcrun should be used with the
  `-l=mpiexec` argument. Note that this argument is used by default if not
  specified.
* `SLURM`: If MPC is configured with SLURM, mpcrun should be used with the
  `-l=srun` argument.
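
For instance, launcher-specific options can be forwarded through `--opt`. The
options shown below (a host file for mpiexec, a partition name for srun) are
illustrative only:

        $ mpcrun -l=mpiexec --opt='-f hosts' -n=4 hello_world
        $ mpcrun -l=srun --opt='-p mypartition' -n=4 hello_world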

Mono-process job
----------------

In order to run an MPC job in a single process with Hydra, you should use one of
the following methods (depending on the thread type you want to use):

        $ mpcrun -m=ethread      -n=4 hello_world
        $ mpcrun -m=ethread_mxn  -n=4 hello_world
        $ mpcrun -m=pthread      -n=4 hello_world

To use one of the above methods with SLURM, just add *-l=srun* to the command
line.

Multi-process job on a single node
----------------------------------

In order to run an MPC job with Hydra in a two-process single-node manner with
the shared memory module enabled (SHM), you should use one of the following
methods (depending on the thread type you want to use). Note that on a single
node, even if the TCP module is explicitly used, MPC automatically uses the SHM
module for all process communications.

        $ mpcrun -m=ethread      -n=4 -p=2 -net=tcp hello_world
        $ mpcrun -m=ethread_mxn  -n=4 -p=2 -net=tcp hello_world
        $ mpcrun -m=pthread      -n=4 -p=2 -net=tcp hello_world

To use one of the above methods with SLURM, just add *-l=srun* to the command
line. Of course, this mode supports both MPI and OpenMP standards, enabling the
use of hybrid programming, as sketched below. There are different implementations
of inter-process communications; a call to *mpcrun --help* details all the
available implementations.
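
As a hedged illustration of such a hybrid program (standard MPI and OpenMP C
APIs, nothing MPC-specific; compile it with `mpc_cc` and your usual OpenMP flag,
e.g. `-fopenmp` for GCC-based compilers):

        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            /* Each MPI task opens an OpenMP parallel region */
            #pragma omp parallel
            {
                printf("Rank %d, thread %d of %d\n",
                       rank, omp_get_thread_num(), omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }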

Multi-process job on multiple nodes
-----------------------------------

In order to run an MPC job on two nodes with eight processes communicating with
TCP, you should use one of the following methods (depending on the thread type
you want to use). Note that on multiple nodes, MPC automatically switches to the
MPC SHared Memory module (SHM) when a communication between processes on the
same node occurs. This behavior is available with all inter-process
communication modules (TCP included).

	$ mpcrun -m=ethread      -n=8 -p=8 -net=tcp -N=2 hello_world
	$ mpcrun -m=ethread_mxn  -n=8 -p=8 -net=tcp -N=2 hello_world
	$ mpcrun -m=pthread      -n=8 -p=8 -net=tcp -N=2 hello_world

Of course, this mode supports both MPI and OpenMP standards, enabling the use of
hybrid programming. There are different implementations of inter-process
communications and launch methods. A call to *mpcrun --help* details all the
available implementations and launch methods.

Multi-process job on multiple nodes with process-based MPI flavor
-----------------------------------------------------------------

MPC offers two flavors of its MPI implementation: process-based and thread-based.
The process-based flavor is similar to the behavior of Open MPI or MPICH.
The flavor can be directly driven by the launch arguments.
To use MPC-MPI in the process-based flavor, the number of MPI processes specified
in the launch command should be the same as the number of OS processes
(hence the -n value should equal the -p value).
The following examples launch eight MPI processes on two nodes, in the
process-based flavor. Each node ends up with four OS processes, each of them
embedding one MPI process.

	$ mpcrun -m=ethread      -n=8 -p=8 -net=tcp -N=2 -c=2 hello_world
	$ mpcrun -m=ethread_mxn  -n=8 -p=8 -net=tcp -N=2 -c=2 hello_world
	$ mpcrun -m=pthread      -n=8 -p=8 -net=tcp -N=2 -c=2 hello_world

Note that the -c argument specifies the number of cores per OS process.
In the above examples, we can deduce that the nodes we want to work on have
at least eight cores, with four OS processes per node and two cores per OS
process.

Multi-process job on multiple nodes with thread-based MPI flavor
-----------------------------------------------------------------

MPC-MPI was developed to allow a thread-based flavor, i.e., inside a node,
MPI processes are threads. This flavor allows constructs to be shared through
shared memory. Communications between MPI processes on the same node are just
data exchanges in shared memory, without having to go through the SHM module.
The following examples launch eight MPI processes on two nodes, in the
thread-based flavor. Each node ends up with one OS process, each OS process
embedding four MPI processes running as threads.

	$ mpcrun -m=ethread      -n=8 -p=2 -net=tcp -N=2 -c=8 hello_world
	$ mpcrun -m=ethread_mxn  -n=8 -p=2 -net=tcp -N=2 -c=8 hello_world
	$ mpcrun -m=pthread      -n=8 -p=2 -net=tcp -N=2 -c=8 hello_world

Since the -c argument specifies the number of cores per OS process, and not
per MPI process, the user needs to request a number of cores per OS process at
least equal to the number of MPI processes per OS process to avoid
oversubscription. In the examples above, we have four MPI processes per OS
process and eight cores per OS process, hence two cores per MPI process.

The examples above illustrate the fully thread-based case: one OS process per
node, with all MPI processes on the node running as threads. It is also possible
to have an intermediate behavior: several OS processes per node, each of them
embedding multiple MPI processes running as threads.
The following examples show such a behavior:

	$ mpcrun -m=ethread      -n=8 -p=4 -net=tcp -N=2 -c=4 hello_world
	$ mpcrun -m=ethread_mxn  -n=8 -p=4 -net=tcp -N=2 -c=4 hello_world
	$ mpcrun -m=pthread      -n=8 -p=4 -net=tcp -N=2 -c=4 hello_world

With these values for -N, -n and -p, each node ends up with two OS processes
and four MPI processes. This implies that each OS process embeds two MPI
processes running as threads. Then, to have two cores per MPI process, it is
necessary to ask for four cores per OS process.

Launch with Hydra
-----------------

In order to execute an MPC job on multiple nodes using *Hydra*, you need to
provide the list of nodes in a *hosts* file and set the HYDRA\_HOST\_FILE
variable to the path of the file. You can also pass the host file as a
parameter of the launcher as follows:

        $ mpcrun -m=ethread -n=8 -p=8 -net=tcp -N=2 --opt='-f hosts' hello_world

See [Using the Hydra Process Manager](https://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager)
for more information about the Hydra host file format.
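
A minimal *hosts* file simply lists one machine per line, optionally followed by
the number of processes to place on it (see the Hydra documentation linked
above); the node names below are placeholders:

        node01:4
        node02:4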

Contacts
--------

In case of questions, feature requests or need for assistance, please contact
any of the maintainers of the project. You can find an exhaustive list in the
[MAINTAINERS](./MAINTAINERS.md) file, summarized below:

* JAEGER Julien julien.jaeger@cea.fr
* ROUSSEL Adrien adrien.roussel@cea.fr
* TABOADA Hugo hugo.taboada@cea.fr
* ParaTools Staff info@paratools.fr