Merge pull request ESMCI#1452 from NCAR/ejh_more_docs_2
added section to user guide about iosystems
edhartnett authored May 26, 2019
2 parents 1465335 + 9932f6e commit 48702cd
Showing 6 changed files with 101 additions and 41 deletions.
12 changes: 8 additions & 4 deletions doc/source/Error.txt
@@ -2,11 +2,15 @@
\page error Error Handling

By default, PIO handles errors internally by printing a string
describing the error and then calling mpi_abort. Application
developers can change this behavior with a call to
\ref PIO_seterrorhandling or PIOc_set_iosystem_error_handling().

The three types of error handling are:

1 - ::PIO_INTERNAL_ERROR abort on error from any task.

2 - ::PIO_BCAST_ERROR broadcast the error to all tasks on the IO communicator.

3 - ::PIO_RETURN_ERROR return the error to the caller and do nothing else.
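
As a minimal C sketch (assuming an IO system has already been
initialized and its ID stored in `iosysid`; the variable names here
are illustrative, not part of the API):

    #include <pio.h>

    int old_method;  // previous handler, returned by the call
    int ret;

    // Ask PIO to return error codes to the caller instead of aborting.
    ret = PIOc_set_iosystem_error_handling(iosysid, PIO_RETURN_ERROR,
                                           &old_method);
    if (ret != PIO_NOERR)
        return ret;  // or handle the error locally
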
*/
16 changes: 13 additions & 3 deletions doc/source/Installing.txt
@@ -177,9 +177,19 @@ immediately with:

(similar to the typical `make check` Autotools target).

*ANOTHER NOTE:* These tests are designed to run in parallel. If you
are on one of the supported supercomputing platforms (e.g., NERSC,
NWSC, ALCF, etc.), then the `ctest` command will assume that the tests
will be run in an appropriately configured and scheduled parallel job.
This can be done by requesting an interactive session from the login
nodes and then running `ctest` from within the interactive terminal.
Alternatively, this can be done by running the `ctest` command from a
job submission script. It is important to understand, however, that
`ctest` itself will preface all of the test executable commands with
the appropriate `mpirun`/`mpiexec`/`runjob`/etc. Hence, you should not
further preface the `ctest` command with these MPI launchers.

- @ref test

### Installing with CMake ###

2 changes: 1 addition & 1 deletion doc/source/Makefile.am
@@ -5,4 +5,4 @@
EXTRA_DIST = api.txt CAMexample.txt Decomp.txt faq.txt Installing.txt \
Testing.txt base.txt c_api.txt contributing_code.txt Error.txt \
Examples.txt Introduction.txt mach_walkthrough.txt \
testpio_example.txt users_guide.txt iosystem.txt
86 changes: 54 additions & 32 deletions doc/source/Testing.txt
@@ -1,29 +1,24 @@
/*! \page test CMake Testing Information

## Building PIO Tests

To build both the Unit and Performance tests for PIO, follow the
general instructions for building PIO in either the
[Installation](@ref install) page or the [Machine Walk-Through](@ref
mach_walkthrough) page. During the Build step after (or instead of)
the **make** command, type **make tests**.

## PIO Unit Tests

The Parallel IO library comes with more than 20 built-in unit tests to
verify that the library is installed and working correctly. These
tests utilize the _CMake_ and _CTest_ automation framework. Because
the Parallel IO library is built for parallel applications, the unit
tests should be run in a parallel environment. The simplest way to do
this is to submit a PBS job to run the **ctest** command.

For a library built into the example directory
`/scratch/user/PIO_build/`, an example PBS script would be:

#!/bin/bash

@@ -101,12 +96,18 @@ On Yellowstone, the unit tests can run using the **execca** or **execgy** commands
> setenv DAV_CORES 4
> execca ctest

## PIO Performance Test

To run the performance tests, you will need to add two files to the
**tests/performance** subdirectory of the PIO build directory. First,
you will need a decomp file. You can download one from our google code
page here: https://svn-ccsm-piodecomps.cgd.ucar.edu/trunk/.

You can use any of these files; save one to your home or base work
directory. Second, you will need to add a namelist file, named
"pioperf.nl". Save this file in the directory with your **pioperf**
executable (this is found in the **tests/performance** subdirectory of
the PIO build directory).

The contents of the namelist file should look like:

@@ -124,7 +125,11 @@

/

Here, the second line ("decompfile") points to the path for your
decomp file (wherever you saved it). For the rest of the lines, each
item added to the list adds another test to be run. For instance, to
test all of the types of supported IO, your pio_typenames would look
like:

pio_typenames = 'pnetcdf','netcdf','netcdf4p','netcdf4c'

@@ -140,15 +145,20 @@ To test with both of the rearranger algorithms:

rearrangers = 1,2

(Each rearranger is a different algorithm for converting data in
memory to data in a file on disk. The first, BOX, is the older method
from PIO1; the second, SUBSET, is a newer method that tends to be more
efficient at large task counts.)
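
For reference, these namelist values correspond to the rearranger
constants used when an IO system is initialized through the C API. A
hedged sketch (the communicator, task counts, and variable names here
are illustrative):

    // rearrangers = 1 corresponds to PIO_REARR_BOX,
    // rearrangers = 2 to PIO_REARR_SUBSET.
    int iosysid, ret;
    ret = PIOc_Init_Intracomm(MPI_COMM_WORLD,
                              4,                // number of I/O tasks
                              1,                // stride between I/O tasks
                              0,                // rank of first I/O task
                              PIO_REARR_SUBSET, // or PIO_REARR_BOX
                              &iosysid);
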

To test with different numbers of variables:

nvars = 8,5,3,2

(Usually, the more variables you use, the higher the data throughput.)

To run, submit a job with 'pioperf' as the executable, and at least as many tasks as you have specified in the decomposition file. On yellowstone, a submit script could look like:
To run, submit a job with 'pioperf' as the executable, and at least as
many tasks as you have specified in the decomposition file. On
Yellowstone, a submit script could look like:

#!/bin/tcsh

@@ -171,11 +181,23 @@ RESULT: write BOX 4 30 2 16.9905924688

You can decode this as:
1. Read/write describes the io operation performed

2. BOX/SUBSET is the algorithm for the rearranger (as described above)

3. 4 [1-4] is the io library used for the operation. The options here
are [1] Parallel-netcdf [2] NetCDF3 [3] NetCDF4-Compressed [4]
NetCDF4-Parallel

4. 30 [any number] is the number of io-specific tasks used in the
operation. Must be less than the number of MPI tasks used in the test.

5. 2 [any number] is the number of variables read or written during
the operation

6. 16.9905924688 [any number] is the Data Rate of the operation in
MB/s. This is the important value for determining performance of the
system. The higher this number is, the better the PIO library is
performing for the given operation.

_Last updated: 05-17-2016_
*/
24 changes: 24 additions & 0 deletions doc/source/iosystem.txt
@@ -0,0 +1,24 @@
/** @page iosystem Initializing the IO System

Using PIO begins with initializing the IO System. This sets up the
MPI communicators for the computational and I/O processors.

When the IO System is created, an IOSystem ID is returned; it must be
used in subsequent PIO calls. The IOSystem ID is returned by the C
functions PIOc_Init_Intracomm() and PIOc_init_async(). Fortran users,
see @ref PIO_init.

When the user program is complete, the IOSystem should be released by
calling the C function PIOc_finalize() or the Fortran function
piolib_mod::finalize() for each open IOSystem.
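
A minimal C sketch of this lifecycle (the stride, base, and number of
I/O tasks here are illustrative, and error checking is omitted):

    #include <mpi.h>
    #include <pio.h>

    int main(int argc, char **argv)
    {
        int iosysid, ret;

        MPI_Init(&argc, &argv);

        // Create the IO System: here a single I/O task at rank 0 of
        // MPI_COMM_WORLD, using the SUBSET rearranger.
        ret = PIOc_Init_Intracomm(MPI_COMM_WORLD, 1, 1, 0,
                                  PIO_REARR_SUBSET, &iosysid);

        // ... create decompositions, create or open files, read and
        // write data ...

        // Release the IO System before shutting down MPI.
        ret = PIOc_finalize(iosysid);

        MPI_Finalize();
        return 0;
    }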

@section intercomm_mode Intercomm Mode

@image html PIO_Intercomm1.png "PIO Intercomm Mode"

@section async_mode Async Mode

@image html PIO_Intracomm1.png "PIO Async Mode"

*/

2 changes: 1 addition & 1 deletion doc/source/users_guide.txt
@@ -8,9 +8,9 @@ examples on how it can be used. Please watch the PIO GitHub site
releases.

- @ref intro
- @ref iosystem
- @ref decomp
- @ref error
- @ref test
- @ref examp
- @ref faq
- @ref api
