Drop deprecated things
avik-pal committed Dec 6, 2023
1 parent 0571034 commit 9a03a0a
Showing 30 changed files with 217 additions and 240 deletions.
4 changes: 2 additions & 2 deletions Project.toml
Original file line number Diff line number Diff line change
@@ -1,7 +1,7 @@
name = "NonlinearSolve"
uuid = "8913a72c-1f9b-4ce2-8d82-65094dcecaec"
authors = ["SciML"]
-version = "2.10.0"
+version = "3.0.0"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -72,7 +72,7 @@ Reexport = "0.2, 1"
SafeTestsets = "0.1"
SciMLBase = "2.9"
SciMLOperators = "0.3"
-SimpleNonlinearSolve = "0.1.23"
+SimpleNonlinearSolve = "1"
SparseArrays = "<0.0.1, 1"
SparseDiffTools = "2.14"
StaticArrays = "1"
2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -30,7 +30,7 @@ NonlinearSolve = "1, 2"
NonlinearSolveMINPACK = "0.1"
SciMLBase = "2.4"
SciMLNLSolve = "0.1"
-SimpleNonlinearSolve = "0.1.5"
+SimpleNonlinearSolve = "1"
StaticArrays = "1"
SteadyStateDiffEq = "1.10, 2"
Sundials = "4.11"
1 change: 1 addition & 0 deletions docs/pages.jl
@@ -25,4 +25,5 @@ pages = ["index.md",
"api/nlsolve.md",
"api/sundials.md",
"api/steadystatediffeq.md"],
+"Release Notes" => "release_notes.md",
]
4 changes: 2 additions & 2 deletions docs/src/api/nonlinearsolve.md
@@ -9,8 +9,8 @@ NewtonRaphson
TrustRegion
PseudoTransient
DFSane
-GeneralBroyden
-GeneralKlement
+Broyden
+Klement
```

## Polyalgorithms
2 changes: 1 addition & 1 deletion docs/src/api/simplenonlinearsolve.md
@@ -18,7 +18,7 @@ Brent

### General Methods

-These methods are suited for any general nonlinear root-finding problem , i.e. `NonlinearProblem`.
+These methods are suited for any general nonlinear root-finding problem, i.e. `NonlinearProblem`.

```@docs
SimpleNewtonRaphson
1 change: 1 addition & 0 deletions docs/src/api/steadystatediffeq.md
@@ -17,4 +17,5 @@ These methods can be used independently of the rest of NonlinearSolve.jl

```@docs
DynamicSS
+SSRootfind
```
11 changes: 0 additions & 11 deletions docs/src/basics/TerminationCondition.md
@@ -63,14 +63,3 @@ DiffEqBase.NonlinearSafeTerminationReturnCode.Failure
DiffEqBase.NonlinearSafeTerminationReturnCode.PatienceTermination
DiffEqBase.NonlinearSafeTerminationReturnCode.ProtectiveTermination
```

-## [Deprecated] Termination Condition API
-
-!!! warning
-
-    This is deprecated. Currently only parts of `SimpleNonlinearSolve` uses this API. That
-    will also be phased out soon!
-
-```@docs
-NLSolveTerminationCondition
-```
28 changes: 28 additions & 0 deletions docs/src/basics/solve.md
@@ -3,3 +3,31 @@
```@docs
solve(prob::SciMLBase.NonlinearProblem, args...; kwargs...)
```

+## General Controls
+
+- `alias_u0::Bool`: Whether to alias the initial condition or use a copy.
+  Defaults to `false`.
+- `internal_norm::Function`: The norm used by the solver. Default depends on algorithm
+  choice.
+
+## Iteration Controls
+
+- `maxiters::Int`: The maximum number of iterations to perform. Defaults to `1000`.
+- `abstol::Number`: The absolute tolerance.
+- `reltol::Number`: The relative tolerance.
+- `termination_condition`: Termination Condition from DiffEqBase. Defaults to
+  `AbsSafeBestTerminationMode()` for `NonlinearSolve.jl` and `AbsTerminateMode()` for
+  `SimpleNonlinearSolve.jl`.
+
+## Tracing Controls
+
+These are exclusively available for native `NonlinearSolve.jl` solvers.
+
+- `show_trace`: Must be `Val(true)` or `Val(false)`. This controls whether the trace is
+  displayed to the console. (Defaults to `Val(false)`)
+- `trace_level`: Needs to be one of Trace Objects: [`TraceMinimal`](@ref),
+  [`TraceWithJacobianConditionNumber`](@ref), or [`TraceAll`](@ref). This controls the
+  level of detail of the trace. (Defaults to `TraceMinimal()`)
+- `store_trace`: Must be `Val(true)` or `Val(false)`. This controls whether the trace is
+  stored in the solution object. (Defaults to `Val(false)`)
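As a hedged sketch of the controls documented above (the problem definition here is invented for illustration and is not taken from the docs), a call combining iteration and tracing options might look like:

```julia
using NonlinearSolve

# Illustrative problem: find u with u .^ 2 == p, root near sqrt(2).
f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0], 2.0)

# Iteration and tracing controls from the section above; the values are arbitrary.
sol = solve(prob, NewtonRaphson();
    maxiters = 100, abstol = 1e-10,
    show_trace = Val(true), trace_level = TraceMinimal(),
    store_trace = Val(true))
```

Since tracing is exclusive to native NonlinearSolve.jl solvers, the `show_trace`/`store_trace` keywords would be dropped when calling a wrapped solver.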
24 changes: 11 additions & 13 deletions docs/src/index.md
@@ -27,19 +27,17 @@ Pkg.add("NonlinearSolve")

## Contributing

-- Please refer to the
-  [SciML ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://github.com/SciML/ColPrac/blob/master/README.md)
-  for guidance on PRs, issues, and other matters relating to contributing to SciML.
-
-- See the [SciML Style Guide](https://github.com/SciML/SciMLStyle) for common coding practices and other style decisions.
-- There are a few community forums:
-
-  + The #diffeq-bridged and #sciml-bridged channels in the
-    [Julia Slack](https://julialang.org/slack/)
-  + The #diffeq-bridged and #sciml-bridged channels in the
-    [Julia Zulip](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
-  + On the [Julia Discourse forums](https://discourse.julialang.org)
-  + See also [SciML Community page](https://sciml.ai/community/)
+- Please refer to the
+  [SciML ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://github.com/SciML/ColPrac/blob/master/README.md)
+  for guidance on PRs, issues, and other matters relating to contributing to SciML.
+
+- See the [SciML Style Guide](https://github.com/SciML/SciMLStyle) for common coding practices and other style decisions.
+- There are a few community forums:
+
+  - The #diffeq-bridged and #sciml-bridged channels in the [Julia Slack](https://julialang.org/slack/)
+  - The #diffeq-bridged and #sciml-bridged channels in the [Julia Zulip](https://julialang.zulipchat.com/#narrow/stream/279055-sciml-bridged)
+  - On the [Julia Discourse forums](https://discourse.julialang.org)
+  - See also [SciML Community page](https://sciml.ai/community/)

## Reproducibility

9 changes: 9 additions & 0 deletions docs/src/release_notes.md
@@ -0,0 +1,9 @@
+# Release Notes
+
+## Breaking Changes in NonlinearSolve.jl v3
+
+1. `GeneralBroyden` and `GeneralKlement` have been renamed to `Broyden` and `Klement`
+   respectively.
+2. Compat for `SimpleNonlinearSolve` has been bumped to `v1`.
+3. The old API to specify autodiff via `Val` and chunksize (that was deprecated in v2) has
+   been removed.
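As a minimal sketch of breaking change 1, assuming a toy problem invented for illustration, migrating is a one-line rename at the call site:

```julia
using NonlinearSolve

# Toy problem: find u with u .^ 2 == p.
f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0], 2.0)

# NonlinearSolve.jl v2 (deprecated): sol = solve(prob, GeneralBroyden())
sol = solve(prob, Broyden())  # v3 spelling; Klement() replaces GeneralKlement() likewise
```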
16 changes: 9 additions & 7 deletions docs/src/solvers/BracketingSolvers.md
@@ -2,13 +2,15 @@

`solve(prob::IntervalNonlinearProblem, alg; kwargs...)`

-Solves for ``f(t)=0`` in the problem defined by `prob` using the algorithm
-`alg`. If no algorithm is given, a default algorithm will be chosen.
+Solves for ``f(t) = 0`` in the problem defined by `prob` using the algorithm `alg`. If no
+algorithm is given, a default algorithm will be chosen.

## Recommended Methods

`ITP()` is the recommended method for scalar interval root-finding problems. It is particularly well-suited for cases where the function is smooth and well-behaved, and it achieves superlinear convergence while retaining the optimal worst-case performance of the bisection method. For more details, consult the detailed solver API docs.

`Ridder` is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence, with a guarantee of at most twice the number of iterations of the bisection method.

`Brent` is a combination of the bisection method, the secant method, and inverse quadratic interpolation. At every iteration, Brent's method decides which of these three is likely to do best, and proceeds by taking a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
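A minimal sketch of the recommendation above, with a bracketing problem invented for the example:

```julia
using NonlinearSolve

f(u, p) = u * u - p                                  # root at sqrt(2) ≈ 1.414
prob = IntervalNonlinearProblem(f, (1.0, 2.0), 2.0)  # bracket containing the root
sol = solve(prob, ITP())                             # recommended interval method
```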

## Full List of Methods
@@ -18,8 +20,8 @@ Solves for ``f(t)=0`` in the problem defined by `prob` using the algorithm
These methods are automatically included as part of NonlinearSolve.jl. Though, one can use
SimpleNonlinearSolve.jl directly to decrease the dependencies and improve load time.

- `ITP`: A non-allocating ITP (Interpolate, Truncate & Project) method
- `Falsi`: A non-allocating regula falsi method
- `Bisection`: A common bisection method
- `Ridder`: A non-allocating Ridder method
- `Brent`: A non-allocating Brent method
20 changes: 10 additions & 10 deletions docs/src/solvers/NonlinearLeastSquaresSolvers.md
@@ -14,13 +14,13 @@ algorithm (`LevenbergMarquardt`).

## Full List of Methods

-- `LevenbergMarquardt()`: An advanced Levenberg-Marquardt implementation with the
-  improvements suggested in the [paper](https://arxiv.org/abs/1201.5885) "Improvements to
-  the Levenberg-Marquardt algorithm for nonlinear least-squares minimization". Designed for
-  large-scale and numerically-difficult nonlinear systems.
-- `GaussNewton()`: An advanced GaussNewton implementation with support for efficient
-  handling of sparse matrices via colored automatic differentiation and preconditioned
-  linear solvers. Designed for large-scale and numerically-difficult nonlinear least squares
-  problems.
-- `SimpleNewtonRaphson()`: Simple Gauss Newton Implementation with `QRFactorization` to
-  solve a linear least squares problem at each step!
+- `LevenbergMarquardt()`: An advanced Levenberg-Marquardt implementation with the
+  improvements suggested in the [paper](https://arxiv.org/abs/1201.5885) "Improvements to
+  the Levenberg-Marquardt algorithm for nonlinear least-squares minimization". Designed for
+  large-scale and numerically-difficult nonlinear systems.
+- `GaussNewton()`: An advanced GaussNewton implementation with support for efficient
+  handling of sparse matrices via colored automatic differentiation and preconditioned
+  linear solvers. Designed for large-scale and numerically-difficult nonlinear least squares
+  problems.
+- `SimpleGaussNewton()`: Simple Gauss Newton Implementation with `QRFactorization` to
+  solve a linear least squares problem at each step!
111 changes: 56 additions & 55 deletions docs/src/solvers/NonlinearSystemSolvers.md
@@ -47,56 +47,56 @@ linear solver, automatic differentiation, abstract array types, GPU,
sparse/structured matrix support, etc. These methods support the largest set of types and
features, but have a bit of overhead on very small problems.

-- `NewtonRaphson()`:A Newton-Raphson method with swappable nonlinear solvers and autodiff
-  methods for high performance on large and sparse systems.
-- `TrustRegion()`: A Newton Trust Region dogleg method with swappable nonlinear solvers and
-  autodiff methods for high performance on large and sparse systems.
-- `LevenbergMarquardt()`: An advanced Levenberg-Marquardt implementation with the
-  improvements suggested in the [paper](https://arxiv.org/abs/1201.5885) "Improvements to
-  the Levenberg-Marquardt algorithm for nonlinear least-squares minimization". Designed for
-  large-scale and numerically-difficult nonlinear systems.
-- `PseudoTransient()`: A pseudo-transient method which mixes the stability of Euler-type
-  stepping with the convergence speed of a Newton method. Good for highly unstable
-  systems.
-- `RobustMultiNewton()`: A polyalgorithm that mixes highly robust methods (line searches and
-  trust regions) in order to be as robust as possible for difficult problems. If this method
-  fails to converge, then one can be pretty certain that most (all?) other choices would
-  likely fail.
-- `FastShortcutNonlinearPolyalg()`: The default method. A polyalgorithm that mixes fast methods
-  with fallbacks to robust methods to allow for solving easy problems quickly without sacrificing
-  robustness on the hard problems.
-- `GeneralBroyden()`: Generalization of Broyden's Quasi-Newton Method with Line Search and
-  Automatic Jacobian Resetting. This is a fast method but unstable when the condition number of
-  the Jacobian matrix is sufficiently large.
-- `GeneralKlement()`: Generalization of Klement's Quasi-Newton Method with Line Search and
-  Automatic Jacobian Resetting. This is a fast method but unstable when the condition number of
-  the Jacobian matrix is sufficiently large.
-- `LimitedMemoryBroyden()`: An advanced version of `LBroyden` which uses a limited memory
-  Broyden method. This is a fast method but unstable when the condition number of
-  the Jacobian matrix is sufficiently large. It is recommended to use `GeneralBroyden` or
-  `GeneralKlement` instead unless the memory usage is a concern.
+- `NewtonRaphson()`: A Newton-Raphson method with swappable nonlinear solvers and autodiff
+  methods for high performance on large and sparse systems.
+- `TrustRegion()`: A Newton Trust Region dogleg method with swappable nonlinear solvers and
+  autodiff methods for high performance on large and sparse systems.
+- `LevenbergMarquardt()`: An advanced Levenberg-Marquardt implementation with the
+  improvements suggested in the [paper](https://arxiv.org/abs/1201.5885) "Improvements to
+  the Levenberg-Marquardt algorithm for nonlinear least-squares minimization". Designed for
+  large-scale and numerically-difficult nonlinear systems.
+- `PseudoTransient()`: A pseudo-transient method which mixes the stability of Euler-type
+  stepping with the convergence speed of a Newton method. Good for highly unstable
+  systems.
+- `RobustMultiNewton()`: A polyalgorithm that mixes highly robust methods (line searches and
+  trust regions) in order to be as robust as possible for difficult problems. If this method
+  fails to converge, then one can be pretty certain that most (all?) other choices would
+  likely fail.
+- `FastShortcutNonlinearPolyalg()`: The default method. A polyalgorithm that mixes fast methods
+  with fallbacks to robust methods to allow for solving easy problems quickly without sacrificing
+  robustness on the hard problems.
+- `Broyden()`: Generalization of Broyden's Quasi-Newton Method with Line Search and
+  Automatic Jacobian Resetting. This is a fast method but unstable when the condition number of
+  the Jacobian matrix is sufficiently large.
+- `Klement()`: Generalization of Klement's Quasi-Newton Method with Line Search and
+  Automatic Jacobian Resetting. This is a fast method but unstable when the condition number of
+  the Jacobian matrix is sufficiently large.
+- `LimitedMemoryBroyden()`: An advanced version of `LBroyden` which uses a limited memory
+  Broyden method. This is a fast method but unstable when the condition number of
+  the Jacobian matrix is sufficiently large. It is recommended to use `Broyden` or
+  `Klement` instead unless the memory usage is a concern.
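The trade-off described above, the default polyalgorithm versus an explicitly chosen quasi-Newton method, can be sketched as follows (toy problem, invented for illustration):

```julia
using NonlinearSolve

# Toy system: find u with u .^ 2 == p componentwise.
f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

sol_default = solve(prob)             # default FastShortcutNonlinearPolyalg
sol_broyden = solve(prob, Broyden())  # fast, but sensitive to ill-conditioned Jacobians
```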

### SimpleNonlinearSolve.jl

These methods are included with NonlinearSolve.jl by default, though SimpleNonlinearSolve.jl
can be used directly to reduce dependencies and improve load times. SimpleNonlinearSolve.jl's
methods excel at small problems and problems defined with static arrays.

-- `SimpleNewtonRaphson()`: A simplified implementation of the Newton-Raphson method.
-- `Broyden()`: The classic Broyden's quasi-Newton method.
-- `LBroyden()`: A low-memory Broyden implementation, similar to L-BFGS. This method is
-  common in machine learning contexts but is known to be unstable in comparison to many
-  other choices.
-- `Klement()`: A quasi-Newton method due to Klement. It's supposed to be more efficient
-  than Broyden's method, and it seems to be in the cases that have been tried, but more
-  benchmarking is required.
-- `SimpleTrustRegion()`: A dogleg trust-region Newton method. Improved globalizing stability
-  for more robust fitting over basic Newton methods, though potentially with a cost.
-- `SimpleDFSane()`: A low-overhead implementation of the df-sane method for solving
-  large-scale nonlinear systems of equations.
-- `SimpleHalley()`: A low-overhead implementation of the Halley method. This is a higher order
-  method and thus can converge faster to low tolerances than a Newton method. Requires higher
-  order derivatives, so best used when automatic differentiation is available.
+- `SimpleNewtonRaphson()`: A simplified implementation of the Newton-Raphson method.
+- `SimpleBroyden()`: The classic Broyden's quasi-Newton method.
+- `SimpleLimitedMemoryBroyden()`: A low-memory Broyden implementation, similar to L-BFGS. This method is
+  common in machine learning contexts but is known to be unstable in comparison to many
+  other choices.
+- `SimpleKlement()`: A quasi-Newton method due to Klement. It's supposed to be more efficient
+  than Broyden's method, and it seems to be in the cases that have been tried, but more
+  benchmarking is required.
+- `SimpleTrustRegion()`: A dogleg trust-region Newton method. Improved globalizing stability
+  for more robust fitting over basic Newton methods, though potentially with a cost.
+- `SimpleDFSane()`: A low-overhead implementation of the df-sane method for solving
+  large-scale nonlinear systems of equations.
+- `SimpleHalley()`: A low-overhead implementation of the Halley method. This is a higher order
+  method and thus can converge faster to low tolerances than a Newton method. Requires higher
+  order derivatives, so best used when automatic differentiation is available.

!!! note

@@ -110,39 +110,40 @@ SteadyStateDiffEq.jl uses ODE solvers to iteratively approach the steady state.
very stable method for solving nonlinear systems, though often more
computationally expensive than direct methods.

-- `DynamicSS()` : Uses an ODE solver to find the steady state. Automatically
-  terminates when close to the steady state.
+- `DynamicSS()`: Uses an ODE solver to find the steady state. Automatically terminates when
+  close to the steady state.
+- `SSRootfind()`: Uses a NonlinearSolve compatible solver to find the steady state.

### SciMLNLSolve.jl

This is a wrapper package for importing solvers from NLsolve.jl into the SciML interface.

- `NLSolveJL()`: A wrapper for [NLsolve.jl](https://github.com/JuliaNLSolvers/NLsolve.jl)

Submethod choices for this algorithm include:

- `:anderson`: Anderson-accelerated fixed-point iteration
- `:newton`: Classical Newton method with an optional line search
- `:trust_region`: Trust region Newton method (the default choice)

### MINPACK.jl

MINPACK.jl methods are good for medium-sized nonlinear solves. It does not scale due to
the lack of sparse Jacobian support, though the methods are very robust and stable.

- `CMINPACK()`: A wrapper for using the classic MINPACK method through [MINPACK.jl](https://github.com/sglyon/MINPACK.jl)

Submethod choices for this algorithm include:

- `:hybr`: Modified version of Powell's algorithm.
- `:lm`: Levenberg-Marquardt.
- `:lmdif`: Advanced Levenberg-Marquardt
- `:hybrd`: Advanced modified version of Powell's algorithm

### Sundials.jl

Sundials.jl are a classic set of C/Fortran methods which are known for good scaling of the
Newton-Krylov form. However, KINSOL is known to be less stable than some other
implementations, as it has no line search or globalizer (trust region).

- `KINSOL()`: The KINSOL method of the SUNDIALS C library
17 changes: 7 additions & 10 deletions docs/src/solvers/SteadyStateSolvers.md
@@ -36,16 +36,13 @@ SteadyStateDiffEq.jl uses ODE solvers to iteratively approach the steady state.
very stable method for solving nonlinear systems,
though often computationally more expensive than direct methods.

-- `DynamicSS` : Uses an ODE solver to find the steady state. Automatically
-  terminates when close to the steady state.
-  `DynamicSS(alg;abstol=1e-8,reltol=1e-6,tspan=Inf)` requires that an
-  ODE algorithm is given as the first argument. The absolute and
-  relative tolerances specify the termination conditions on the
-  derivative's closeness to zero. This internally uses the
-  `TerminateSteadyState` callback from the Callback Library. The
-  simulated time, for which the ODE is solved, can be limited by
-  `tspan`. If `tspan` is a number, it is equivalent to passing
-  `(zero(tspan), tspan)`.
+- `DynamicSS`: Uses an ODE solver to find the steady state. Automatically terminates when
+  close to the steady state. `DynamicSS(alg; abstol=1e-8, reltol=1e-6, tspan=Inf)` requires
+  that an ODE algorithm is given as the first argument. The absolute and relative tolerances
+  specify the termination conditions on the derivative's closeness to zero. This internally
+  uses the `TerminateSteadyState` callback from the Callback Library. The simulated time,
+  for which the ODE is solved, can be limited by `tspan`. If `tspan` is a number, it is
+  equivalent to passing `(zero(tspan), tspan)`.

Example usage:
