diff --git a/docs/source/algorithms.md b/docs/source/algorithms.md
index 006e25402..4ec46b6b5 100644
--- a/docs/source/algorithms.md
+++ b/docs/source/algorithms.md
@@ -392,7 +392,7 @@ install optimagic.
.. warning::
In our benchmark using a quadratic objective function, the trust_constr
   algorithm did not find the optimum very precisely (it was accurate to fewer
   than 4 decimal places).
- If you require high precision, you should refine an optimum found with Powell
+ If you require high precision, you should refine an optimum found with trust_constr
with another local optimizer.
.. note::
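A minimal sketch of the two-stage refinement recommended in the warning above, assuming optimagic's `om.minimize` API; the algorithm names `scipy_trust_constr` and `scipy_neldermead` are assumptions based on optimagic's algorithm list:

```python
import numpy as np
import optimagic as om


def sphere(params):
    # Simple quadratic objective, used only for illustration.
    return params @ params


# Stage 1: trust_constr, which may stop short of full precision.
rough = om.minimize(fun=sphere, params=np.arange(5.0), algorithm="scipy_trust_constr")

# Stage 2: restart another local optimizer from the rough optimum.
refined = om.minimize(fun=sphere, params=rough.params, algorithm="scipy_neldermead")
print(refined.params)
```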
@@ -907,12 +907,6 @@ We implement a few algorithms from scratch. They are currently considered experi
and therefore may require fewer iterations to arrive at a local optimum than
Nelder-Mead.
- The criterion function :func:`func` should return a dictionary with the following
- fields:
-
- 1. ``"value"``: The sum of squared (potentially weighted) errors.
- 2. ``"root_contributions"``: An array containing the root (weighted) contributions.
-
    The problem must be scaled such that the bounds correspond to the unit hypercube
:math:`[0, 1]^n`. For unconstrained problems, scale each parameter such that unit
changes in parameters result in similar order-of-magnitude changes in the criterion
@@ -1015,12 +1009,6 @@ need to have [petsc4py](https://pypi.org/project/petsc4py/) installed.
and therefore may require fewer iterations to arrive at a local optimum than
Nelder-Mead.
- The criterion function :func:`func` should return a dictionary with the following
- fields:
-
- 1. ``"value"``: The sum of squared (potentially weighted) errors.
- 2. ``"root_contributions"``: An array containing the root (weighted) contributions.
-
    The problem must be scaled such that the bounds correspond to the unit hypercube
:math:`[0, 1]^n`. For unconstrained problems, scale each parameter such that unit
changes in parameters result in similar order-of-magnitude changes in the criterion
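The scaling described in both hunks above is an affine map onto the unit hypercube. A minimal sketch in plain NumPy; `to_unit_cube` and `from_unit_cube` are illustration names, not optimagic API:

```python
import numpy as np


def to_unit_cube(x, lower, upper):
    """Map parameters from the box [lower, upper] onto [0, 1]^n."""
    return (x - lower) / (upper - lower)


def from_unit_cube(u, lower, upper):
    """Map unit-hypercube parameters back to the original bounds."""
    return lower + u * (upper - lower)


lower, upper = np.array([-2.0, 0.0]), np.array([4.0, 10.0])
x = np.array([1.0, 5.0])
u = to_unit_cube(x, lower, upper)  # array([0.5, 0.5])
assert np.allclose(from_unit_cube(u, lower, upper), x)
```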
diff --git a/docs/source/how_to/how_to_algorithm_selection.ipynb b/docs/source/how_to/how_to_algorithm_selection.ipynb
index d6bbeb15c..0dfc58307 100644
--- a/docs/source/how_to/how_to_algorithm_selection.ipynb
+++ b/docs/source/how_to/how_to_algorithm_selection.ipynb
@@ -52,7 +52,7 @@
" E[\"Can you exploit
a least-squares
structure?\"] -- yes --> F[\"differentiable?\"]\n",
" E[\"Can you exploit
a least-squares
structure?\"] -- no --> G[\"differentiable?\"]\n",
"\n",
- " F[\"differentiable?\"] -- yes --> H[\"scipy_ls_lm
scipy_ls_trf
scipy_ls_dogleg\"]\n",
+ " F[\"differentiable?\"] -- yes --> H[\"scipy_ls_lm
scipy_ls_trf
scipy_ls_dogbox\"]\n",
" F[\"differentiable?\"] -- no --> I[\"nag_dflos
pounders
tao_pounders\"]\n",
"\n",
" G[\"differentiable?\"] -- yes --> J[\"scipy_lbfgsb
nlopt_lbfgsb
fides\"]\n",
diff --git a/src/optimagic/optimizers/_pounders/gqtpar.py b/src/optimagic/optimizers/_pounders/gqtpar.py
index bf9eb68dd..04648f7f9 100644
--- a/src/optimagic/optimizers/_pounders/gqtpar.py
+++ b/src/optimagic/optimizers/_pounders/gqtpar.py
@@ -55,7 +55,7 @@ def gqtpar(model, x_candidate, *, k_easy=0.1, k_hard=0.2, maxiter=200):
- ``linear_terms``, a np.ndarray of shape (n,) and
- ``square_terms``, a np.ndarray of shape (n,n).
x_candidate (np.ndarray): Initial guess for the solution of the subproblem.
- k_easy (float): topping criterion for the "easy" case.
+ k_easy (float): Stopping criterion for the "easy" case.
k_hard (float): Stopping criterion for the "hard" case.
maxiter (int): Maximum number of iterations to perform. If reached,
terminate.
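A hypothetical call to this internal solver, based only on the signature and model fields documented above; `SimpleNamespace` stands in for the internal model type, and the exact structure of the return value is not shown here:

```python
from types import SimpleNamespace

import numpy as np
from optimagic.optimizers._pounders.gqtpar import gqtpar

# Quadratic model with g = linear_terms (shape (n,)) and
# H = square_terms (shape (n, n)), as the docstring requires.
model = SimpleNamespace(
    linear_terms=np.array([1.0, -2.0]),
    square_terms=np.array([[4.0, 0.0], [0.0, 2.0]]),
)
result = gqtpar(model, x_candidate=np.zeros(2), k_easy=0.1, k_hard=0.2)
print(result)  # solution of the trust-region subproblem
```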
diff --git a/src/optimagic/optimizers/_pounders/pounders_auxiliary.py b/src/optimagic/optimizers/_pounders/pounders_auxiliary.py
index d0f167c2f..223c028fe 100644
--- a/src/optimagic/optimizers/_pounders/pounders_auxiliary.py
+++ b/src/optimagic/optimizers/_pounders/pounders_auxiliary.py
@@ -240,7 +240,7 @@ def solve_subproblem(
gtol_rel_conjugate_gradient (float): Convergence tolerance for the relative
gradient norm in the conjugate gradient step of the trust-region
subproblem ("bntr").
- k_easy (float): topping criterion for the "easy" case in the trust-region
+ k_easy (float): Stopping criterion for the "easy" case in the trust-region
subproblem ("gqtpar").
k_hard (float): Stopping criterion for the "hard" case in the trust-region
subproblem ("gqtpar").
diff --git a/src/optimagic/optimizers/pounders.py b/src/optimagic/optimizers/pounders.py
index 24b8b723a..87b652225 100644
--- a/src/optimagic/optimizers/pounders.py
+++ b/src/optimagic/optimizers/pounders.py
@@ -262,7 +262,7 @@ def internal_solve_pounders(
gtol_rel_conjugate_gradient_sub (float): Convergence tolerance for the
relative gradient norm in the conjugate gradient step of the trust-region
subproblem if "cg" is used as ``conjugate_gradient_method_sub`` ("bntr").
- k_easy_sub (float): topping criterion for the "easy" case in the trust-region
+ k_easy_sub (float): Stopping criterion for the "easy" case in the trust-region
subproblem ("gqtpar").
k_hard_sub (float): Stopping criterion for the "hard" case in the trust-region
subproblem ("gqtpar").