This library provides a wrapper for the NLopt C library, a library for non-linear optimization. It offers many different algorithms for easy comparison, separated into local / gradient based, global / gradient based, local / derivative free and global / derivative free algorithms.
This library is listed in the Nimble package repository. You can install it just like any other Nimble package:
nimble install nlopt
For general usage take a look at the test case in tests/tsimple.nim. Despite the name, it should make it reasonably easy to understand how the library interface is used. For an explanation of the different Nim procedures and types, see Documentation below.
The C library has an "object oriented like" structure. There is a central nlopt_opt type, which is created by the user and needs to be given to all other nlopt functions, of which there are many.
See the documentation of the C library here:
https://nlopt.readthedocs.io/en/latest/NLopt_Reference/
Here we only cover the functionality related to the usage of the Nim library.
The Nim library partly follows that structure in the sense that it provides a slightly more abstract NloptOpt type. This type stores the user defined settings (algorithm, dimensionality, bounds, step sizes, etc.) and writes them to the C library before the call to the optimization function.
Instead of depending on a large number of different getter and setter functions, the user sets all parameters on the NloptOpt object.
Some functionality is still missing from the Nim interface. General optimization with and without gradients, including inequality constraints, is already available. The remaining functionality can be accessed via the exported wrapper functions. In that case care needs to be taken with the data types.
The number of functions / types used by the user is reduced to 5 functions / 3 types:
As the name suggests, newNloptOpt is used to create a new NloptOpt object.
Signature:
proc newNloptOpt(opt_name: string, nDims: int, bounds: seq[tuple[l, u: float]] = @[]): NloptOpt
It needs the algorithm opt_name as a string (to be changed to a pure enum), the dimensionality of the problem nDims and potential bounds on each dimension. The bounds are given as a sequence of tuples. Each tuple corresponds to one of the nDims dimensions and contains the (lower bound, upper bound) of that dimension. Either no bounds at all or bounds for all dimensions need to be given. If only a single dimension is to be bounded, set the other bounds to -Inf and Inf.
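For instance, creating an optimizer for a two dimensional problem, in which only the first dimension is bounded, might look as follows. This is only a sketch: the algorithm name "LN_COBYLA" is assumed here to be one of the valid names accepted by the wrapper (the names follow the NLopt algorithm identifiers, see the NLopt documentation).

import math   # for `Inf`
import nlopt

# 2 dimensional problem; the first dimension is bounded to [-10, 10],
# the second is left unbounded via (-Inf, Inf)
var opt = newNloptOpt("LN_COBYLA", 2, @[(l: -10.0, u: 10.0), (l: -Inf, u: Inf)])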
newVarStruct creates a new VarStruct object. It is used to wrap the user defined optimization function and arbitrary data to be used in that function.
Signature:
proc newVarStruct[T, U](uFunc: T, data: U): VarStruct[U]
Due to a bug (?), we unfortunately cannot constrain T. T needs to be either a FuncProto or a FuncProtoGrad. See below for an explanation of these. Thus, the custom function following either type is handed as uFunc.
The data argument is an arbitrary user defined object, which is used to hand data to the user defined function. To be precise, it is the object which will be given to FuncProto or FuncProtoGrad, as explained below.
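As a sketch of how this looks in practice, consider the following. MyData and sumSq are hypothetical names chosen for illustration; sumSq follows the FuncProto type explained in the next section.

type
  MyData = object
    offset: float

# a gradient free objective: sum of squared distances of all
# parameters from `offset`
proc sumSq(p: seq[float], d: MyData): float =
  for x in p:
    result += (x - d.offset) * (x - d.offset)

var vStruct = newVarStruct(sumSq, MyData(offset: 1.0))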
FuncProto is a generic procedure type, which the custom function to be optimized needs to match. This is the type to be used for algorithms which are gradient free (i.e. the user does not have to calculate the gradients manually). Signature:
FuncProto[T] = proc (p: seq[float], func_data: T): float
The first argument p contains the parameters, which are optimized. The generic func_data is an arbitrary type, which usually contains the data on which the optimization is done. But since it can be any type, it allows arbitrary data to be injected into the function to be used for the calculation. Internally, FuncProto is wrapped by a function following the nlopt_func signature. The func_data is handed as a raw pointer to the C library and from there, upon a function evaluation, back to Nim.
The return value must be the value of the function evaluated at p.
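As an example, a least squares objective over some (x, y) data might look like the following sketch (LineFitData and lsq are hypothetical names, not part of the library):

type
  LineFitData = object
    x, y: seq[float]

# matches `FuncProto[LineFitData]`: returns the sum of squared
# residuals of a straight line `p[0] + p[1] * x` over the data
proc lsq(p: seq[float], d: LineFitData): float =
  for i in 0 ..< d.x.len:
    let resid = d.y[i] - (p[0] + p[1] * d.x[i])
    result += resid * resid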
FuncProtoGrad is a generic procedure type, which the custom function to be optimized needs to match. This is the type to be used for algorithms which are not gradient free (i.e. the user does have to calculate the gradients manually). Signature:
FuncProtoGrad[T] = proc (p: seq[float], func_data: T): (float, seq[float])
The signature of FuncProtoGrad is identical to FuncProto (the p and func_data arguments serve the same purpose, see above for an explanation), with the exception of the return type.
The second element of the return tuple has to be the gradient of the function with respect to each parameter. These values are handed back to NLopt and taken into account for the calculation of the next parameters.
In many cases it may be enough to perform numeric differentiation, e.g. via the symmetric difference quotient.
A possible implementation might look something like:
import std/[math, sequtils, fenv]

proc myFunc(p: seq[float], fitObj: FitObject): (float, seq[float]) =
  ## `FitObject` takes the place of `func_data`.
  ## In this case:
  ## type
  ##   FitObject = object
  ##     x, y, yErr: seq[float]
  ## `funcToCall` is the model function to fit and is assumed to be
  ## defined elsewhere with the signature
  ## `proc funcToCall(params: seq[float], x: float): float`
  let x = fitObj.x
  let y = fitObj.y
  let yErr = fitObj.yErr
  # a seq for the resulting gradients
  var gradRes = newSeq[float](p.len)
  # a float for the function evaluation at `p`
  var res = 0.0
  # a variable for the small change we take in `p_i`
  var h: float
  # a temp variable for the individual parts of the `Chi^2` sum
  var diff = 0.0
  proc fn(params: seq[float]): float =
    # calculate the model's y positions to perform a curve fit
    # of `funcToCall` to the `(x, y)` data
    let fitY = x.mapIt(funcToCall(params, it))
    for i in 0 .. x.high:
      diff = (y[i] - fitY[i]) / yErr[i]
      result += pow(diff, 2.0)
    # result of our internal `f(p)` is the reduced Pearson's Chi^2
    result = result / (x.len - p.len).float
  # the function evaluation is simply our `Chi^2` value of the parameters
  res = fn(p)
  # now calculate the numerical derivative for each parameter
  for i in 0 .. gradRes.high:
    # calc some reasonable `h` for this parameter; the `max` guards
    # against `h == 0` in case `p[i] == 0`
    h = max(abs(p[i]), 1e-10) * sqrt(epsilon(float64))
    var
      modParsUp = p
      modParsDown = p
    modParsUp[i] = p[i] + h
    modParsDown[i] = p[i] - h
    # numerical partial derivative according to the symmetric difference quotient
    gradRes[i] = (fn(modParsUp) - fn(modParsDown)) / (2.0 * h)
  result = (res, gradRes)
The above can easily be wrapped in a template, for instance, to lift any model function funcToCall into a proc fittable via Pearson's Chi^2.
VarStruct is the unified container, which stores the user defined function of either type above and arbitrary user data, which will be handed to that function during each evaluation.
Signature:
VarStruct*[T] = ref object
  case kind*: FuncKind
  of NoGrad:
    userFunc*: FuncProto[T]
  of Grad:
    userFuncGrad*: FuncProtoGrad[T]
  data*: T

# where ``FuncKind`` is
FuncKind* {.pure.} = enum
  NoGrad, Grad
It’s a variant object, which stores either a FuncProto if the kind is NoGrad, or a FuncProtoGrad if it is Grad. In principle newVarStruct does not need to be used. One can create such a variant object manually, but then needs to take care to:
- set the kind field accordingly
- use the correct field name for the user function of that kind
This is what the newVarStruct procedure takes care of.
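For illustration, the manual equivalent of the newVarStruct call from the sketch above (reusing the hypothetical sumSq and MyData) would be:

# equivalent to `newVarStruct(sumSq, MyData(offset: 1.0))`; the `kind`
# and the field used for the user function have to match
var manual = VarStruct[MyData](kind: FuncKind.NoGrad,
                               userFunc: sumSq,
                               data: MyData(offset: 1.0))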
setFunction is used to set the FuncProto(Grad) function as the nlopt_func of the C nlopt_opt object. It performs the wrapping of the user function into a suitable nlopt_func. In addition, it also sets the data, which will be given to the user defined function.
Signature:
proc setFunction[T](nlopt: var NloptOpt, vStruct: var VarStruct[T])
The first argument is the optimizer and vStruct is the user created VarStruct object. It is a var argument as well, since we want to avoid copying the data internally.
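Continuing the sketch from the previous sections, attaching the user function to the optimizer is a single call:

# `opt` as created in the `newNloptOpt` example, `vStruct` as created
# in the `newVarStruct` example
opt.setFunction(vStruct)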
addInequalityConstraint is used to add inequality constraints to the optimization problem. The signature is exactly the same as for setFunction. One also creates a custom constraint function and a corresponding VarStruct object. This constraint function will be called in between calls to the actual function to be optimized. There may be one constraint for each dimension. See the NLopt documentation for more information.
Signature:
proc addInequalityConstraint*[T](nlopt: var NloptOpt, vStruct: var VarStruct[T])
See the setFunction explanation above.
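As a sketch, keeping with the hypothetical example from above: NLopt considers a point feasible if the constraint function evaluates to a value <= 0, so a constraint p[0] + p[1] <= 3 could be added like this:

# constraint function following `FuncProto[MyData]`: the point is
# feasible whenever `p[0] + p[1] - 3.0 <= 0`
proc mySumConstraint(p: seq[float], d: MyData): float =
  p[0] + p[1] - 3.0

var cStruct = newVarStruct(mySumConstraint, MyData(offset: 0.0))
opt.addInequalityConstraint(cStruct)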
optimize is the actual function, which starts the optimization routine after everything has been set up. It sets all additional parameters of the NloptOpt (tolerances, step sizes, etc.) before calling the actual nlopt_optimize function.
Signature:
proc optimize*[T](nlopt: var NloptOpt, params: seq[T]): tuple[p: seq[float], f: float]
The first parameter is the configured NloptOpt object. params is the initial guess for the parameters to be optimized.
After optimization, the status of the optimization is stored in the status field of the nlopt object.
The return value is a tuple of the sequence of optimized parameters p and the function value f after the last evaluation.
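Putting the pieces of the hypothetical example from the previous sections together, a full optimization might look like:

# start the optimization from an initial guess for both parameters
let (params, funcVal) = opt.optimize(@[0.5, 0.5])
echo "optimized parameters: ", params
echo "final function value: ", funcVal
echo "optimization status:  ", opt.status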
The license of the C library is found in the c_header folder, which contains the headers as they were wrapped using c2nim.
The Nim code is published under the MIT license.