Do not use nested dissection by default. (#550)
* Do not use nested dissection by default.
Provide a named parameter `nested_dissection` to `symbolic()` to turn it on.

Co-authored-by: Kristoffer Carlsson <kristoffer.carlsson@chalmers.se>

* Merge Sparse Linear Algebra docs into SparseArrays so that it shows in the Julia manual
Consolidate all external packages in one place

---------

Co-authored-by: Kristoffer Carlsson <kristoffer.carlsson@chalmers.se>
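
A minimal usage sketch of the new keyword (note that `SparseArrays.CHOLMOD.symbolic` is an internal, unexported helper, and the matrix below is an arbitrary stand-in):

```julia
using LinearAlgebra, SparseArrays
using SparseArrays: CHOLMOD

# An arbitrary sparse symmetric positive-definite matrix.
A = sprandn(1_000, 1_000, 0.01)
A = A + A' + 100I

# CHOLMOD's symbolic analysis expects a symmetric/Hermitian matrix.
S = CHOLMOD.Sparse(Hermitian(A))

F_amd = CHOLMOD.symbolic(S)                           # new default: AMD-based orderings only
F_nd  = CHOLMOD.symbolic(S; nested_dissection = true) # opt back in to nested dissection
```
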
ViralBShah and KristofferC authored Aug 4, 2024
1 parent e61663a commit 1527014
Showing 4 changed files with 87 additions and 71 deletions.
1 change: 0 additions & 1 deletion docs/make.jl
@@ -8,7 +8,6 @@ makedocs(
sitename = "SparseArrays",
pages = Any[
"SparseArrays" => "index.md",
"Sparse Linear Algebra" => "solvers.md",
];
warnonly = [:missing_docs, :cross_references],
)
53 changes: 53 additions & 0 deletions docs/src/index.md
@@ -206,6 +206,27 @@ section of the standard library reference.
| [`sprandn(m,n,d)`](@ref) | [`randn(m,n)`](@ref) | Creates an *m*-by-*n* random matrix (of density *d*) with iid non-zero elements distributed according to the standard normal (Gaussian) distribution. |
| [`sprandn(rng,m,n,d)`](@ref) | [`randn(rng,m,n)`](@ref) | Creates an *m*-by-*n* random matrix (of density *d*) with iid non-zero elements generated with the `rng` random number generator. |
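
As a quick, illustrative sketch of the sparse variants in the last two rows (any seeded RNG works):

```julia
using Random, SparseArrays

# Roughly 5% of the entries are stored, each drawn iid from N(0, 1).
S = sprandn(MersenneTwister(42), 1_000, 1_000, 0.05)
nnz(S) / length(S)   # ≈ 0.05, the requested density
```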

## [Sparse Linear Algebra](@id stdlib-sparse-linalg)

Sparse matrix solvers call functions from [SuiteSparse](http://suitesparse.com). The following factorizations are available:

| Type | Description |
|:----------------------|:--------------------------------------------- |
| `CHOLMOD.Factor` | Cholesky and LDLt factorizations |
| `UMFPACK.UmfpackLU` | LU factorization |
| `SPQR.QRSparse` | QR factorization |

These factorizations are described in more detail in the [Sparse Linear Algebra API section](@ref stdlib-sparse-linalg-api):

1. [`cholesky`](@ref SparseArrays.CHOLMOD.cholesky)
2. [`ldlt`](@ref SparseArrays.CHOLMOD.ldlt)
3. [`lu`](@ref SparseArrays.UMFPACK.lu)
4. [`qr`](@ref SparseArrays.SPQR.qr)
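
As an illustrative sketch of how these factorizations are typically used (assuming a sparse symmetric positive-definite `A` and a sparse, square, nonsingular `B`; error handling omitted):

```julia
using LinearAlgebra, SparseArrays

n = 1_000
A = sprandn(n, n, 5 / n); A = A + A' + 20I   # symmetric positive definite (diagonally dominant)
B = sprandn(n, n, 5 / n) + 20I               # square and nonsingular
b = randn(n)

x1 = cholesky(A) \ b   # CHOLMOD.Factor via Cholesky
x2 = ldlt(A) \ b       # CHOLMOD.Factor via LDLt
x3 = lu(B) \ b         # UMFPACK.UmfpackLU
x4 = qr(B) \ b         # SPQR.QRSparse; for a tall matrix this gives the least-squares solution
```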

```@meta
DocTestSetup = nothing
```

# [SparseArrays API](@id stdlib-sparse-arrays)

```@docs
@@ -245,6 +266,26 @@ SparseArrays.ftranspose!
```@meta
DocTestSetup = nothing
```

# [Sparse Linear Algebra API](@id stdlib-sparse-linalg-api)

```@docs
SparseArrays.CHOLMOD.cholesky
SparseArrays.CHOLMOD.cholesky!
SparseArrays.CHOLMOD.lowrankupdate
SparseArrays.CHOLMOD.lowrankupdate!
SparseArrays.CHOLMOD.lowrankdowndate
SparseArrays.CHOLMOD.lowrankdowndate!
SparseArrays.CHOLMOD.lowrankupdowndate!
SparseArrays.CHOLMOD.ldlt
SparseArrays.UMFPACK.lu
SparseArrays.SPQR.qr
```
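
For the low-rank update routines above, a small sketch of the intended usage pattern (assuming a sparse symmetric positive-definite `A` and a dense vector `v`; illustrative only):

```julia
using LinearAlgebra, SparseArrays

n = 500
A = sprandn(n, n, 5 / n); A = A + A' + 20I   # sparse symmetric positive definite
v = randn(n)

F  = cholesky(A)             # CHOLMOD.Factor of A
F1 = lowrankupdate(F, v)     # factorization of A + v*v', reusing the existing symbolic factorization
F2 = lowrankdowndate(F1, v)  # factorization of (A + v*v') - v*v', i.e. A again

x = F1 \ randn(n)            # solve with the updated factor
```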

```@meta
DocTestSetup = nothing
```

# Noteworthy External Sparse Packages

Several other Julia packages provide sparse matrix implementations that should be mentioned:
@@ -264,3 +305,15 @@ Several other Julia packages provide sparse matrix implementations that should b
7. [ExtendableSparse.jl](https://github.com/j-fu/ExtendableSparse.jl) enables fast insertion into sparse matrices using a lazy approach to new stored indices.

8. [Finch.jl](https://github.com/willow-ahrens/Finch.jl) supports extensive multidimensional sparse array formats and operations through a mini tensor language and compiler, all in native Julia. It handles COO, CSF, CSR, CSC, and more, along with operations such as broadcast and reduce, as well as custom operations.

External packages providing sparse direct solvers:
1. [KLU.jl](https://github.com/JuliaSparse/KLU.jl)
2. [Pardiso.jl](https://github.com/JuliaSparse/Pardiso.jl/)

External packages providing iterative solvers for eigensystems and singular value decompositions:
1. [ArnoldiMethod.jl](https://github.com/JuliaLinearAlgebra/ArnoldiMethod.jl)
2. [KrylovKit.jl](https://github.com/Jutho/KrylovKit.jl)
3. [Arpack.jl](https://github.com/JuliaLinearAlgebra/Arpack.jl)

External packages for working with graphs:
1. [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl)
44 changes: 0 additions & 44 deletions docs/src/solvers.md

This file was deleted.

60 changes: 34 additions & 26 deletions src/solvers/cholmod.jl
@@ -795,7 +795,7 @@ function ssmult(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, stype::Integer,
A, B = convert.(Sparse{promote_type(Tv1, Tv2), promote_type(Ti1, Ti2)}, (A, B))
return ssmult(A, B, stype, values, sorted)
end
-function horzcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
+function horzcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
{Tv1<:VRealTypes, Tv2<:VRealTypes, Ti1, Ti2}
A, B = convert.(Sparse{promote_type(Tv1, Tv2), promote_type(Ti1, Ti2)}, (A, B))
return horzcat(A, B, values)
@@ -809,7 +809,7 @@ function sdmult!(A::Sparse{Tv1, Ti}, transpose::Bool,
A, X = convert(Sparse{Tv3, Ti}, A), convert(Dense{Tv3}, X)
return sdmult!(A, transpose, α, β, X, Y)
end
-function vertcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
+function vertcat(A::Sparse{Tv1, Ti1}, B::Sparse{Tv2, Ti2}, values::Bool) where
{Tv1<:VRealTypes, Ti1, Tv2<:VRealTypes, Ti2}
A, B = convert.(Sparse{promote_type(Tv1, Tv2), promote_type(Ti1, Ti2)}, (A, B))
return vertcat(A, B, values)
@@ -895,7 +895,7 @@ function Dense(A::StridedVecOrMatInclAdjAndTrans)
return Dense{T}(A)
end
# Don't always promote to Float64 now that we have Float32 support.
-Dense(A::StridedVecOrMatInclAdjAndTrans{T}) where
+Dense(A::StridedVecOrMatInclAdjAndTrans{T}) where
{T<:Union{Float16, ComplexF16, Float32, ComplexF32}} = Dense{promote_type(T, Float32)}(A)


@@ -1055,8 +1055,8 @@ Sparse(A::Hermitian{Tv, SparseMatrixCSC{Tv,Ti}}) where {Tv, Ti} =
Sparse{promote_type(Tv, Float64), Ti <: ITypes ? Ti : promote_type(Ti, Int)}(
A.data, A.uplo == 'L' ? -1 : 1
)
-Sparse(A::Hermitian{Tv, SparseMatrixCSC{Tv,Ti}}) where
-    {Tv<:Union{Float16, Float32, ComplexF32, ComplexF16}, Ti} =
+Sparse(A::Hermitian{Tv, SparseMatrixCSC{Tv,Ti}}) where
+    {Tv<:Union{Float16, Float32, ComplexF32, ComplexF16}, Ti} =
Sparse{promote_type(Float32, Tv), Ti <: ITypes ? Ti : promote_type(Ti, Int)}(
A.data, A.uplo == 'L' ? -1 : 1
)
@@ -1076,7 +1076,7 @@ function Base.convert(::Type{Sparse{Tnew, Inew}}, A::Sparse{Tv, Ti}) where {Tnew
a = unsafe_load(typedpointer(A))
S = allocate_sparse(a.nrow, a.ncol, a.nzmax, Bool(a.sorted), Bool(a.packed), a.stype, Tnew, Inew)
s = unsafe_load(typedpointer(S))

ap = unsafe_wrap(Array, a.p, (a.ncol + 1,), own = false)
sp = unsafe_wrap(Array, s.p, (s.ncol + 1,), own = false)
copyto!(sp, ap)
@@ -1376,7 +1376,7 @@ end

## Multiplication
(*)(A::Sparse, B::Sparse) = ssmult(A, B, 0, true, true)
-(*)(A::Sparse, B::Dense) = sdmult!(A, false, 1., 0., B,
+(*)(A::Sparse, B::Dense) = sdmult!(A, false, 1., 0., B,
zeros(size(A, 1), size(B, 2), promote_type(eltype(A), eltype(B)))
)
(*)(A::Sparse, B::VecOrMat) = (*)(A, Dense(B))
@@ -1413,7 +1413,7 @@ function *(adjA::Adjoint{<:Any,<:Sparse}, B::Sparse)
end

*(adjA::Adjoint{<:Any,<:Sparse}, B::Dense) = (
-    A = parent(adjA); sdmult!(A, true, 1., 0., B,
+    A = parent(adjA); sdmult!(A, true, 1., 0., B,
zeros(size(A, 2), size(B, 2), promote_type(eltype(A), eltype(B))))
)
*(adjA::Adjoint{<:Any,<:Sparse}, B::VecOrMat) = adjA * Dense(B)
@@ -1423,25 +1423,33 @@ end

## Compute that symbolic factorization only
function symbolic(A::Sparse{<:VTypes, Ti};
-    perm::Union{Nothing,AbstractVector{<:Integer}}=nothing,
-    postorder::Bool=isnothing(perm)||isempty(perm), userperm_only::Bool=true) where Ti
+    perm::Union{Nothing,AbstractVector{<:Integer}}=nothing,
+    postorder::Bool=isnothing(perm)||isempty(perm),
+    userperm_only::Bool=true,
+    nested_dissection::Bool=false) where Ti

sA = unsafe_load(pointer(A))
sA.stype == 0 && throw(ArgumentError("sparse matrix is not symmetric/Hermitian"))

-    @cholmod_param postorder = postorder begin
-        if perm === nothing || isempty(perm) # TODO: deprecate empty perm
-            return analyze(A)
-        else # user permutation provided
-            if userperm_only # use perm even if it is worse than AMD
-                @cholmod_param nmethods = 1 begin
-                    return analyze_p(A, Ti[p-1 for p in perm])
-                end
-            else
-                return analyze_p(A, Ti[p-1 for p in perm])
-            end
-        end
-    end
+    # The default is to just use AMD. Use nested dissection only if explicitly asked for.
+    # https://github.com/JuliaSparse/SparseArrays.jl/issues/548
+    # https://github.com/DrTimothyAldenDavis/SuiteSparse/blob/26ababc7f3af725c5fb9168a1b94850eab74b666/CHOLMOD/Include/cholmod.h#L555-L574
+    @cholmod_param nmethods = (nested_dissection ? 0 : 2) begin
+        @cholmod_param postorder = postorder begin
+            if perm === nothing || isempty(perm) # TODO: deprecate empty perm
+                return analyze(A)
+            else # user permutation provided
+                if userperm_only # use perm even if it is worse than AMD
+                    @cholmod_param nmethods = 1 begin
+                        return analyze_p(A, Ti[p-1 for p in perm])
+                    end
+                else
+                    return analyze_p(A, Ti[p-1 for p in perm])
+                end
+            end
+        end
+    end

end

function cholesky!(F::Factor{Tv}, A::Sparse{Tv};
Expand All @@ -1467,7 +1475,7 @@ See also [`cholesky`](@ref).
!!! note
This method uses the CHOLMOD library from SuiteSparse, which only supports
-real or complex types in single or double precision.
+real or complex types in single or double precision.
Input matrices not of those element types will
be converted to these types as appropriate.
"""
@@ -1587,8 +1595,8 @@ true
!!! note
This method uses the CHOLMOD[^ACM887][^DavisHager2009] library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse).
-CHOLMOD only supports real or complex types in single or double precision.
-Input matrices not of those element types will be
+CHOLMOD only supports real or complex types in single or double precision.
+Input matrices not of those element types will be
converted to these types as appropriate.
Many other functions from CHOLMOD are wrapped but not exported from the
@@ -1633,8 +1641,8 @@ have the type tag, it must still be symmetric or Hermitian.
See also [`ldlt`](@ref).
!!! note
-This method uses the CHOLMOD library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse),
-which only supports real or complex types in single or double precision.
+This method uses the CHOLMOD library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse),
+which only supports real or complex types in single or double precision.
Input matrices not of those element types will
be converted to these types as appropriate.
"""
@@ -1695,7 +1703,7 @@ it should be a permutation of `1:size(A,1)` giving the ordering to use
!!! note
This method uses the CHOLMOD[^ACM887][^DavisHager2009] library from [SuiteSparse](https://github.com/DrTimothyAldenDavis/SuiteSparse).
-CHOLMOD only supports real or complex types in single or double precision.
+CHOLMOD only supports real or complex types in single or double precision.
Input matrices not of those element types will
be converted to these types as appropriate.
@@ -1767,7 +1775,7 @@ See also [`lowrankupdate!`](@ref), [`lowrankdowndate`](@ref), [`lowrankdowndate!
"""
lowrankupdate(F::Factor{Tv}, V::AbstractArray{Tv2}) where {Tv, Tv2} =
lowrankupdate!(
-        change_xdtype(F, promote_type(Tv, Tv2)),
+        change_xdtype(F, promote_type(Tv, Tv2)),
convert(AbstractArray{promote_type(Tv, Tv2)}, V)
)

@@ -1782,7 +1790,7 @@ See also [`lowrankdowndate!`](@ref), [`lowrankupdate`](@ref), [`lowrankupdate!`]
"""
lowrankdowndate(F::Factor{Tv}, V::AbstractArray{Tv2}) where {Tv, Tv2} =
lowrankdowndate!(
-        change_xdtype(F, promote_type(Tv, Tv2)),
+        change_xdtype(F, promote_type(Tv, Tv2)),
convert(AbstractArray{promote_type(Tv, Tv2)}, V)
)

