Replace InstrumentedFunction by ExperimentFunction (different API) #421

Merged · 27 commits · Jan 6, 2020
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,11 @@
### Breaking changes

- `Instrumentation` is now a `Variable` for simplicity and flexibility. The `Variable` API has therefore changed heavily, and more (even bigger) changes are coming. This should only impact custom-made variables.
- `InstrumentedFunction` has been aggressively deprecated in favor of using the `Instrumentation` directly at optimizer initialization,
and of using `ExperimentFunction` to define functions to be used in benchmarks. The main differences are:
* the `instrumentation` attribute is renamed to `parametrization` for forward compatibility,
* `__init__` takes exactly two arguments (the main function and the parametrization/instrumentation), and
* calls to `__call__` are forwarded directly to the main function (instead of converting from the data space).
- `Candidates` now have a `uid` instead of a `uuid` for compatibility reasons.
- Update archive `keys/items_as_array` methods to `keys/items_as_arrays` for consistency.
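The new calling convention described in the deprecation note above can be sketched with a minimal stand-in class (illustrative names only, not nevergrad's actual implementation): `__init__` takes exactly the main function and a parametrization, and `__call__` forwards straight to the function.

```python
class ExperimentFunctionSketch:
    """Stand-in illustrating the API change; not nevergrad's real class."""

    def __init__(self, function, parametrization):
        self.function = function
        # the attribute is named "parametrization" (renamed from "instrumentation")
        self.parametrization = parametrization

    def __call__(self, *args, **kwargs):
        # calls are forwarded directly to the main function,
        # instead of being converted from the data space first
        return self.function(*args, **kwargs)


square = ExperimentFunctionSketch(lambda x: x ** 2, parametrization=None)
print(square(3))  # -> 9
```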

7 changes: 6 additions & 1 deletion docs/benchmarking.md
@@ -46,6 +46,11 @@ python -m nevergrad.benchmark additional_experiment --imports=nevergrad/benchmar
```
See the [example file](../nevergrad/benchmark/additional/example.py) to understand more precisely how functions/optimizers/experiments are specified. You can also submit a pull request to add your code directly in `nevergrad`. In this case, please refer to these [guidelines](adding_an_algorithm.md).

Functions used for the experiments must derive from `nevergrad.instrumentation.InstrumentedFunction`. This class keeps a dictionary of descriptors of your function settings through the `_descriptors` attribute, which is used to create the columns of the data file produced by the experiments. See the docstrings for more information, and [functionlib.py](../nevergrad/functions/arcoating/core.py) and [example.py](../nevergrad/benchmark/additional/example.py) for examples.
Functions used for the experiments must derive from `nevergrad.functions.ExperimentFunction`. This class implements features necessary for the benchmarks:
- it keeps the parametrization of the function, which is used for instantiating the optimizers;
- it keeps a dictionary of descriptors of your function settings through the `_descriptors` attribute, which is used to create the columns of the data file produced by the experiments;
- it lets you override methods enabling custom behaviors, such as `evaluation_function`, called at evaluation time (e.g. to avoid noise when possible), and `compute_pseudotime`, to mock computation time during benchmarks.

See the docstrings for more information, and [functionlib.py](../nevergrad/functions/arcoating/core.py) and [example.py](../nevergrad/benchmark/additional/example.py) for examples.
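A minimal stand-in (assumed simplified signatures, not nevergrad's actual base class) showing what overriding the two hooks described above might look like: a noisy objective whose `evaluation_function` strips the noise, and a `compute_pseudotime` that mocks the cost of one call.

```python
import random


class NoisySphereSketch:
    """Stand-in for an ExperimentFunction subclass; signatures are assumed."""

    def __call__(self, x):
        # the noisy objective seen by the optimizer during the run
        return x * x + random.gauss(0, 0.1)

    def evaluation_function(self, x):
        # noise-free value used at evaluation time
        return x * x

    def compute_pseudotime(self, input_parameter, loss):
        # pretend every call takes half a second of simulated time
        return 0.5


f = NoisySphereSketch()
print(f.evaluation_function(2))  # -> 4 (exact, no noise)
```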

If you want your experiment plan to be seedable, be extra careful about how you handle randomness in the experiment generator, since each individual experiment may be run in any order. See [experiments.py](../nevergrad/benchmark/experiments.py) for examples of seedable experiment plans. For simplicity's sake, the experiment plan generator is however not required to have a seed parameter (but it will not be reproducible in this case).
7 changes: 1 addition & 6 deletions docs/instrumentation.md
@@ -4,7 +4,7 @@

The aim of instrumentation is to turn a piece of code with parameters you want to optimize into a function defined on an n-dimensional continuous data space in which the optimization can easily be performed. For this, discrete/categorical arguments must be transformed into continuous variables, and all variables concatenated. The instrumentation subpackage will help you do so thanks to:
- the `variables` modules providing priors that can be used to define each argument.
- the `Instrumentation`, and `InstrumentedFunction` classes which provide an interface for converting any arguments into the data space used for optimization, and convert from data space back to the arguments space.
- the `Instrumentation` class which provide an interface for converting any arguments into the data space used for optimization, and convert from data space back to the arguments space.
- the `FolderFunction` which helps transform any code into a Python function in a few lines. This can be especially helpful to optimize parameters in non-Python 3.6+ code (C++, Octave, etc...) or parameters in scripts.
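As a simplified illustration of the continuous relaxation mentioned above (not nevergrad's exact transform), a categorical argument can be encoded as one continuous weight per option and decoded with a softmax followed by an argmax:

```python
import math


def data_to_choice(weights, options):
    """Decode continuous weights into a categorical choice (illustrative only)."""
    # numerically stable softmax over the continuous weights
    top = max(weights)
    exps = [math.exp(w - top) for w in weights]
    total = sum(exps)
    probs = [e / total for e in exps]
    # pick the option with the highest probability
    return options[max(range(len(probs)), key=probs.__getitem__)]


print(data_to_choice([1.0, -80.0, -80.0], ["a", "b", "c"]))  # -> a
```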


@@ -55,11 +55,6 @@ print(instrum.data_to_arguments([1, -80, -80, 80, 3]))
# value=7 because 3 * std + mean = 7
```


For convenience and until a better way is implemented (see future notice), we provide an `InstrumentedFunction` class converting a function of any parameter space into the data space. Here is a basic example of its use:

**Future notice**: `InstrumentedFunction` may come to disappear (or at least we discourage its use) since a new API for instrumenting on the optimizer side has been added in v0.2.0.

You can then directly perform optimization on a function given its instrumentation:
```python
def myfunction(arg1, arg2, arg3, value=3):
10 changes: 5 additions & 5 deletions nevergrad/benchmark/additional/example.py
@@ -4,8 +4,7 @@
# LICENSE file in the root directory of this source tree.

import nevergrad as ng
from nevergrad import instrumentation as inst
from nevergrad.functions import ArtificialFunction
from nevergrad import functions as ngfuncs
from nevergrad.benchmark import registry as xpregistry
from nevergrad.benchmark import Experiment
# this file implements:
@@ -15,12 +14,13 @@
# it can be used with the --imports parameters if nevergrad.benchmark commandline function


class CustomFunction(inst.InstrumentedFunction):
class CustomFunction(ngfuncs.ExperimentFunction):
"""Example of a new test function
"""

def __init__(self, offset):
super().__init__(self.oracle_call, ng.var.Scalar())
super().__init__(self.oracle_call, ng.Instrumentation(ng.var.Scalar()))
self.register_initialization(offset=offset) # to create equivalent instances through "copy"
self.offset = offset
# add your own function descriptors (from base class, we already get "dimension" etc...)
# those will be recorded during benchmarks
@@ -40,7 +40,7 @@ class NewOptimizer(ng.optimizers.registry["NoisyBandit"]):

@xpregistry.register # register experiments in the experiment registry
def additional_experiment(): # The signature can also include a seed argument if need be (see experiments.py)
funcs = [ArtificialFunction(name="sphere", block_dimension=10), CustomFunction(2)]
funcs = [ngfuncs.ArtificialFunction(name="sphere", block_dimension=10), CustomFunction(2)]
for budget in [10, 100]:
for optimizer in ["NewOptimizer", "RandomSearch"]:
for func in funcs: # 2 realizations of the same function
7 changes: 4 additions & 3 deletions nevergrad/benchmark/execution.py
@@ -5,7 +5,8 @@
import heapq
from collections import deque
from typing import List, Callable, Any, NamedTuple, Tuple, Dict, Optional, Deque
from ..functions import PostponedObject # this object only serves to provide delays that the executor must use to order jobs
from nevergrad.functions import ExperimentFunction
# this object only serves to provide delays that the executor must use to order jobs


class MockedTimedJob:
@@ -42,8 +43,8 @@ def process(self) -> None:
self._output = self._func(*self._args, **self._kwargs)
# compute the delay and add to queue
self._delay = 1.
if isinstance(self._func, PostponedObject):
self._delay = max(0, self._func.get_postponing_delay(self._args, self._kwargs, self._output))
if isinstance(self._func, ExperimentFunction):
self._delay = max(0, self._func.compute_pseudotime((self._args, self._kwargs), self._output))

def result(self) -> Any:
"""Return the result if "done()" is true, and raises
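The pseudotime delay computed in `process` above is what lets the mocked executor release results in simulated chronological order rather than submission order. A minimal sketch of that ordering idea, using a heap as `execution.py` does (an assumed simplification, not the module's actual code):

```python
import heapq


def run_mocked(jobs):
    """Return job names ordered by simulated finish time (illustrative sketch).

    jobs: list of (name, delay) pairs, where delay plays the role of the
    pseudotime returned by compute_pseudotime.
    """
    # the heap orders entries by (finish time, insertion index) so that
    # shorter mocked jobs complete first, regardless of submission order
    heap = [(delay, i, name) for i, (name, delay) in enumerate(jobs)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(jobs))]


print(run_mocked([("slow", 3.0), ("fast", 0.5), ("medium", 1.0)]))
# -> ['fast', 'medium', 'slow']
```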