docs: remove redundant old preferences
avik-pal committed Jul 12, 2024
1 parent c0f03a0 commit 2283abf
Showing 2 changed files with 7 additions and 12 deletions.
12 changes: 1 addition & 11 deletions docs/src/manual/distributed_utils.md
@@ -59,16 +59,6 @@ opt_state = DistributedUtils.synchronize!!(backend, opt_state)
`local_rank(backend) == 0`. This ensures that only the master process logs and serializes
the model.

-## [GPU-Aware MPI](@id gpu-aware-mpi)
-
-If you are using a custom MPI build that supports CUDA or ROCM, you can use the following
-preferences with [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl):
-
-1. `LuxDistributedMPICUDAAware` - Set this to `true` if your MPI build is CUDA aware.
-2. `LuxDistributedMPIROCMAware` - Set this to `true` if your MPI build is ROCM aware.
-
-By default, both of these values are set to `false`.

## Migration Guide from `FluxMPI.jl`

Let's compare the changes we need to make wrt the
@@ -96,7 +86,7 @@ And that's pretty much it!
2. All of the functions now require a [communication backend](@ref communication-backends)
as input.
3. We don't automatically determine if the MPI Implementation is CUDA or ROCM aware. See
-   [GPU-aware MPI](@ref gpu-aware-mpi) for more information.
+   [GPU-aware MPI](@ref gpu-aware-mpi-preferences) for more information.
4. Older [`Lux.gpu`](@ref) implementations used to "just work" with `FluxMPI.jl`. We expect
[`gpu_device`](@ref) to continue working as expected, however, we recommend using
[`gpu_device`](@ref) after calling [`DistributedUtils.initialize`](@ref) to avoid any
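As an aside, the ordering recommended in item 4 — initialize the distributed backend before selecting a device — might look like the following sketch. The backend type (`MPIBackend`), `get_distributed_backend`, and `total_workers` are assumptions based on the `DistributedUtils` calls referenced elsewhere in this diff, not verbatim from this commit:

```julia
using Lux, MPI

# Sketch only: initialize the distributed backend *before* querying devices.
DistributedUtils.initialize(MPIBackend)
backend = DistributedUtils.get_distributed_backend(MPIBackend)

# Only after initialization do we pick the device, so each rank gets a
# consistent GPU assignment.
device = gpu_device()

# Log and serialize only on the master process, per the note above.
if DistributedUtils.local_rank(backend) == 0
    @info "Running on $(DistributedUtils.total_workers(backend)) workers"
end
```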
7 changes: 6 additions & 1 deletion docs/src/manual/preferences.md
@@ -24,11 +24,16 @@ exhaustive list of preferences that Lux.jl uses.
of backends for nested automatic differentiation. See the manual section on
[nested automatic differentiation](@ref nested_autodiff) for more details.

-## GPU-Aware MPI Support
+## [GPU-Aware MPI Support](@id gpu-aware-mpi-preferences)

If you are using a custom MPI build that supports CUDA or ROCM, you can use the following
preferences with [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl):

+1. `cuda_aware_mpi` - Set this to `true` if your MPI build is CUDA aware.
+2. `rocm_aware_mpi` - Set this to `true` if your MPI build is ROCM aware.
+
+By default, both of these preferences are set to `false`.
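For illustration, such preferences are typically set via a `LocalPreferences.toml` entry in the active project. This is a sketch; the `[Lux]` table name assumes these preferences belong to the Lux package:

```toml
# LocalPreferences.toml — sketch; assumes the preferences are read by Lux
[Lux]
cuda_aware_mpi = true   # custom MPI build is CUDA-aware
rocm_aware_mpi = false  # default
```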

## GPU Backend Selection

1. `gpu_backend` - Set this to bypass the automatic backend selection and use a specific
