DOC: reactivate rst-backticks pre-commit hook and fix nits #95

Merged 1 commit on Oct 14, 2021
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -47,7 +47,7 @@ repos:
hooks:
- id: blacken-docs
additional_dependencies: [black==21.8b0]
-#- repo: https://github.com/pre-commit/pygrep-hooks
-#  rev: v1.9.0
-#  hooks:
-#  - id: rst-backticks
+- repo: https://github.com/pre-commit/pygrep-hooks
+  rev: v1.9.0
+  hooks:
+  - id: rst-backticks
18 changes: 9 additions & 9 deletions doc/source/halo_catalog.rst
@@ -251,7 +251,7 @@ analysis begins with a call to

hc.create()

-The `save_halos` keyword determines whether the actual Halo objects
+The ``save_halos`` keyword determines whether the actual Halo objects
are saved after analysis on them has completed or whether just the
contents of their quantities dicts will be retained for creating the
final catalog. The looping over halos uses a call to parallel_objects
@@ -274,7 +274,7 @@ Parallelism
Halo analysis using the
:class:`~yt_astro_analysis.halo_analysis.halo_catalog.halo_catalog.HaloCatalog`
can be parallelized by adding ``yt.enable_parallelism()`` to the top of the
-script and running with `mpirun`.
+script and running with ``mpirun``.

.. code-block:: python

@@ -290,17 +290,17 @@

The nature of the parallelism can be configured with two keywords provided to the
:meth:`~yt_astro_analysis.halo_analysis.halo_catalog.halo_catalog.HaloCatalog.create`
-function: `njobs` and `dynamic`. If `dynamic` is set to False, halos will be
-distributed evenly over all processors. If `dynamic` is set to True, halos
-will be allocated to processors via a task queue. The `njobs` keyword determines
+function: ``njobs`` and ``dynamic``. If ``dynamic`` is set to False, halos will be
+distributed evenly over all processors. If ``dynamic`` is set to True, halos
+will be allocated to processors via a task queue. The ``njobs`` keyword determines
the number of processor groups over which the analysis will be divided. The
-default value for `njobs` is "auto". In this mode, a single processor will be
-allocated to analyze a halo. The `dynamic` keyword is overridden to False if
+default value for ``njobs`` is "auto". In this mode, a single processor will be
+allocated to analyze a halo. The ``dynamic`` keyword is overridden to False if
the number of processors being used is even and True (use a task queue) if odd.
-Set `njobs` to -1 to mandate a single processor to analyze a halo and to a positive
+Set ``njobs`` to -1 to mandate a single processor to analyze a halo and to a positive
number to create that many processor groups for performing analysis. The number of
processors used per halo will then be the total number of processors divided by
-`njobs`. For more information on running ``yt`` in parallel, see
+``njobs``. For more information on running ``yt`` in parallel, see
:ref:`parallel-computation`.
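The even/odd "auto" rule above can be made concrete in a few lines of plain Python. This is only an illustrative model of the documented behavior, not yt's actual implementation; the function name and return value are invented for this sketch.

```python
def resolve_parallelism(num_procs, njobs="auto", dynamic=False):
    """Model the documented njobs/dynamic rules (illustrative only)."""
    if njobs == "auto":
        # one processor analyzes each halo; a task queue (dynamic=True)
        # is used only when the total process count is odd
        return {"procs_per_halo": 1, "dynamic": num_procs % 2 == 1}
    if njobs == -1:
        # mandate a single processor per halo
        return {"procs_per_halo": 1, "dynamic": dynamic}
    # njobs processor groups split the available processors evenly
    return {"procs_per_halo": num_procs // njobs, "dynamic": dynamic}
```

For example, 8 processors with the default ``njobs="auto"`` gives one processor per halo and no task queue, while ``njobs=4`` on 8 processors gives two processors per halo.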

Loading Created Halo Catalogs
14 changes: 7 additions & 7 deletions doc/source/halo_finding.rst
@@ -133,7 +133,7 @@ to a directory associated with the ``output_dir`` keyword provided to the
:class:`~yt_astro_analysis.halo_analysis.halo_catalog.halo_catalog.HaloCatalog`.
The number of files for each catalog is equal to the number of processors used. The
catalog files have the naming convention
-`<dataset_name>/<dataset_name>.<processor_number>.h5`, where `dataset_name` refers
+``<dataset_name>/<dataset_name>.<processor_number>.h5``, where ``dataset_name`` refers
to the name of the snapshot. For more information on loading these with yt, see
:ref:`halocatalog`.
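The naming convention can be spelled out as a tiny helper; this function is purely illustrative and not part of yt.

```python
def catalog_path(dataset_name, processor_number):
    # one file per processor, grouped in a directory named after the snapshot
    return f"{dataset_name}/{dataset_name}.{processor_number}.h5"
```

A run of an assumed snapshot named ``DD0046`` on four processors would therefore produce ``DD0046/DD0046.0.h5`` through ``DD0046/DD0046.3.h5``.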

@@ -153,7 +153,7 @@ The ``yt_astro_analysis`` package works with the latest version of
obtaining and installing ``rockstar-galaxies`` for use with
``yt_astro_analysis``.

-To run Rockstar, your script must be run with `mpirun` using a minimum of three
+To run Rockstar, your script must be run with ``mpirun`` using a minimum of three
processors. Rockstar processes are divided into three groups:

* readers: these read particle data from the snapshots. Set the number of readers
@@ -162,7 +162,7 @@ processors. Rockstar processes are divided into three groups:
Set the number of writers with the ``num_writers`` keyword argument.
* server: this process coordinates the activity of the readers and writers.
There is only one server process. The total number of processes given with
-`mpirun` must be equal to the number of readers plus writers plus one
+``mpirun`` must be equal to the number of readers plus writers plus one
(for the server).

.. code-block:: python
@@ -196,19 +196,19 @@ keyword provided to the
:class:`~yt_astro_analysis.halo_analysis.halo_catalog.halo_catalog.HaloCatalog`.
The number of files for each catalog is equal to the number of writers. The
catalog files have the naming convention
-`halos_<catalog_number>.<processor_number>.bin`, where catalog number 0 is the
+``halos_<catalog_number>.<processor_number>.bin``, where catalog number 0 is the
first halo catalog calculated. For more information on loading these with yt,
see :ref:`rockstar`.
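The process accounting described above is simple arithmetic: the rank count passed to ``mpirun`` must cover every reader, every writer, and the single server. The helper below is hypothetical and only restates that rule.

```python
def rockstar_process_count(num_readers, num_writers):
    # total MPI ranks required: readers + writers + one server process
    return num_readers + num_writers + 1
```

So a run with one reader and two writers needs ``mpirun -np 4``.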

Parallelism
-----------

-All three halo finders can be run in parallel using `mpirun` and by adding
+All three halo finders can be run in parallel using ``mpirun`` and by adding
``yt.enable_parallelism()`` to the top of the script. The computational domain
will be divided evenly among all processes (among the writers in the case of
Rockstar) with a small amount of padding to ensure halos on sub-volume
boundaries are not split. For FoF and HOP, the number of processors used only
-needs to provided to `mpirun` (e.g., `mpirun -np 8` to run on 8 processors).
+needs to be provided to ``mpirun`` (e.g., ``mpirun -np 8`` to run on 8 processors).

.. code-block:: python

@@ -238,6 +238,6 @@ belonging to each halo can be saved to the catalog when using either the
:ref:`fof_finding` or :ref:`hop_finding` methods. This is enabled by default
and can be disabled by setting ``save_particles`` to ``False`` in the
``finder_kwargs`` dictionary, as described above. Rockstar will also save
-halo particles to the `.bin` files. However, reading these is not currently
+halo particles to the ``.bin`` files. However, reading these is not currently
supported in yt. See :ref:`halocatalog` for information on accessing halo
particles for FoF and HOP catalogs.
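As a usage sketch, disabling particle output for a FoF run might look like the fragment below. The dataset path is a placeholder, and the fragment assumes the ``HaloCatalog`` interface described above; running it requires ``yt_astro_analysis`` installed and a real snapshot.

```python
import yt
from yt.extensions.astro_analysis.halo_analysis import HaloCatalog

ds = yt.load("DD0046/DD0046")  # placeholder dataset path
hc = HaloCatalog(
    data_ds=ds,
    finder_method="fof",
    # particles are saved by default; turn that off here
    finder_kwargs={"save_particles": False},
)
hc.create()
```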