
Commit

Add GP Tuner and related doc (#1191)
* fix link err in docs

* add spaces

* re-organise links for detailed descriptions of the tuners and accessors; fix link err in HpoComparision.md

* add in-page link by change .md to .html

* delete #section from cross-file links to make links work in both readthedocs and github docs

* gp_tuner init from fmfn's repo

* fix params bug by adding float>int transition

* add optimal choices; support randint&quniform type; add doc

* refine doc and code

* change mnist yml comments

* typo fix

* fix val err

* fix minimize mode err

* add config test and Hpo result

* support quniform type; update doc; update test config

* update doc

* un-commit changed in yarn.lock

* fix optimize mode bug

* optimize mode

* optimize mode

* reset pylint, gitignore

* revert .gitignore yarn.lock
suiguoxin authored and leckie-chn committed Jun 25, 2019
1 parent c217992 commit a587648
Showing 21 changed files with 679 additions and 13 deletions.
44 changes: 43 additions & 1 deletion docs/en_US/BuiltinTuner.md
@@ -19,7 +19,7 @@ Currently we support the following algorithms:
|[__Network Morphism__](#NetworkMorphism)|Network Morphism provides functions to automatically search for architecture of deep learning models. Every child network inherits the knowledge from its parent network and morphs into diverse types of networks, including changes of depth, width, and skip-connection. Next, it estimates the value of a child network using the historic architecture and metric pairs. Then it selects the most promising one to train. [Reference Paper](https://arxiv.org/abs/1806.10282)|
|[__Metis Tuner__](#MetisTuner)|Metis offers the following benefits when it comes to tuning parameters: While most tools only predict the optimal configuration, Metis gives you two outputs: (a) current prediction of optimal configuration, and (b) suggestion for the next trial. No more guesswork. While most tools assume training datasets do not have noisy data, Metis actually tells you if you need to re-sample a particular hyper-parameter. [Reference Paper](https://www.microsoft.com/en-us/research/publication/metis-robustly-tuning-tail-latencies-cloud-systems/)|
|[__BOHB__](#BOHB)|BOHB is a follow-up work of Hyperband. It targets the weakness of Hyperband that new configurations are generated randomly without leveraging finished trials. For the name BOHB, HB means Hyperband, BO means Bayesian Optimization. BOHB leverages finished trials by building multiple TPE models; a proportion of new configurations are generated through these models. [Reference Paper](https://arxiv.org/abs/1807.01774)|
|[__GP Tuner__](#GPTuner)|Gaussian Process Tuner is a sequential model-based optimization (SMBO) approach with Gaussian Process as the surrogate. [Reference Paper](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf), [Github Repo](https://github.com/fmfn/BayesianOptimization)|
<br>

## Usage of Builtin Tuners
@@ -366,3 +366,45 @@ advisor:
    max_budget: 27
    eta: 3
```
<br>

<a name="GPTuner"></a>

![](https://placehold.it/15/1589F0/000000?text=+) `GP Tuner`

> Builtin Tuner Name: **GPTuner**

Note that the only acceptable types of search space are `choice`, `randint`, `uniform`, `quniform`, `loguniform`, `qloguniform`.
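For reference, a minimal search space restricted to these types might look like the following (shown here as the Python dict that the tuner's `update_search_space` receives after NNI parses `search_space.json`; the parameter names are only illustrative):

```python
# Illustrative search space using only types accepted by GP Tuner.
search_space = {
    "learning_rate": {"_type": "loguniform", "_value": [0.0001, 0.1]},
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "hidden_size": {"_type": "quniform", "_value": [128, 1024, 64]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
}
```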

**Suggested scenario**

As a Sequential Model-Based Global Optimization (SMBO) strategy, GP Tuner replaces direct search with a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is computationally cheaper and can be solved with common tools. GP Tuner is therefore most suitable when evaluating the function to be optimized is very expensive and computation resources are limited. However, its computational cost grows at *O(N^3)* with the number of observations because it requires inverting the Gram matrix, so it is not suitable when a large number of trials is needed. [Detailed Description](./GPTuner.md)

**Requirement of classArg**

* **optimize_mode** (*'maximize' or 'minimize', optional, default = 'maximize'*) - If 'maximize', the tuner will target to maximize metrics. If 'minimize', the tuner will target to minimize metrics.
* **utility** (*'ei', 'ucb' or 'poi', optional, default = 'ei'*) - The utility (acquisition) function to use. 'ei', 'ucb' and 'poi' correspond to 'Expected Improvement', 'Upper Confidence Bound' and 'Probability of Improvement', respectively (see the sketch after this list).
* **kappa** (*float, optional, default = 5*) - Used by the 'ucb' utility function. The larger `kappa` is, the more exploratory the tuner will be.
* **xi** (*float, optional, default = 0*) - Used by the 'ei' and 'poi' utility functions. The larger `xi` is, the more exploratory the tuner will be.
* **nu** (*float, optional, default = 2.5*) - Used to specify the Matern kernel. The smaller `nu` is, the less smooth the approximated function is.
* **alpha** (*float, optional, default = 1e-6*) - Used to specify the Gaussian Process Regressor. Larger values correspond to an increased noise level in the observations.
* **cold_start_num** (*int, optional, default = 10*) - Number of random explorations to perform before Gaussian Process fitting. Random exploration can help by diversifying the exploration space.
* **selection_num_warm_up** (*int, optional, default = 1e5*) - Number of random points to evaluate when searching for the point that maximizes the acquisition function.
* **selection_num_starting_points** (*int, optional, default = 250*) - Number of times to run L-BFGS-B from a random starting point after the warm-up.
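To make the roles of `utility`, `kappa` and `xi` concrete, below is a minimal sketch of how the three acquisition functions are commonly defined on top of a fitted `GaussianProcessRegressor`, following the conventions of fmfn's BayesianOptimization package (the Github Repo linked in the table above) from which this tuner is derived; the tuner's own implementation lives in `nni/gp_tuner/util.py`:

```python
from scipy.stats import norm

def ucb(x, gp, kappa):
    # Upper Confidence Bound: larger kappa -> more exploration.
    mean, std = gp.predict(x, return_std=True)
    return mean + kappa * std

def ei(x, gp, y_max, xi):
    # Expected Improvement over the best observed value y_max.
    mean, std = gp.predict(x, return_std=True)
    z = (mean - y_max - xi) / std
    return (mean - y_max - xi) * norm.cdf(z) + std * norm.pdf(z)

def poi(x, gp, y_max, xi):
    # Probability of Improvement: larger xi -> more exploration.
    mean, std = gp.predict(x, return_std=True)
    z = (mean - y_max - xi) / std
    return norm.cdf(z)
```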

**Usage example**

```yaml
# config.yml
tuner:
  builtinTunerName: GPTuner
  classArgs:
    optimize_mode: maximize
    kappa: 5
    xi: 0
    nu: 2.5
    alpha: 1e-6
    cold_start_num: 10
    selection_num_warm_up: 100000
    selection_num_starting_points: 250
```
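Once the configuration file is ready, the experiment can be launched the same way as with any other builtin tuner, for example with `nnictl create --config config.yml`.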
5 changes: 4 additions & 1 deletion docs/en_US/CommunitySharings/HpoComparision.md
@@ -98,8 +98,11 @@ The total search space is 1,204,224; we set the maximum number of trials to 1000.
| HyperBand |0.414065|0.415222|0.417628|
| HyperBand |0.416807|0.417549|0.418828|
| HyperBand |0.415550|0.415977|0.417186|
| GP |0.414353|0.418563|0.420263|
| GP |0.414395|0.418006|0.420431|
| GP |0.412943|0.416566|0.418443|

For Metis, there are about 300 trials because it runs slowly due to its high time complexity O(n^3) in Gaussian Process.
In this example, all the algorithms are used with default parameters. For Metis, there are about 300 trials because it runs slowly due to its high time complexity O(n^3) in Gaussian Process.

## RocksDB Benchmark 'fillrandom' and 'readrandom'

10 changes: 10 additions & 0 deletions docs/en_US/GPTuner.md
@@ -0,0 +1,10 @@
GP Tuner on NNI
===

## GP Tuner

Bayesian optimization works by constructing a posterior distribution of functions (Gaussian Process here) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not.

GP Tuner is designed to minimize/maximize the number of steps required to find a combination of parameters that are close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is cheaper (in the computational sense) and common tools can be employed. Therefore Bayesian Optimization is most adequate for situations where sampling the function to be optimized is a very expensive endeavor.
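As a rough, self-contained illustration of this idea (not the tuner's actual code), the sketch below fits a Gaussian Process to the observations collected so far and picks the next point by maximizing a UCB acquisition function over cheap random candidates on a toy 1-D objective; the real tuner instead maximizes the acquisition function with a random warm-up followed by L-BFGS-B restarts, and works over the NNI search space:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Hypothetical expensive function we want to maximize.
    return -(x - 2.0) ** 2

rng = np.random.RandomState(0)
low, high = -5.0, 5.0
xs = list(rng.uniform(low, high, size=5))      # cold start: random exploration
ys = [objective(x) for x in xs]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

for _ in range(20):
    gp.fit(np.array(xs).reshape(-1, 1), np.array(ys))
    # Proxy problem: maximize a UCB acquisition over cheap random candidates.
    cand = rng.uniform(low, high, size=10000).reshape(-1, 1)
    mean, std = gp.predict(cand, return_std=True)
    x_next = float(cand[np.argmax(mean + 5.0 * std), 0])   # kappa = 5
    xs.append(x_next)
    ys.append(objective(x_next))               # the only expensive evaluation per step

print("best x found:", xs[int(np.argmax(ys))])
```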

This optimization approach is described in Section 3 of [Algorithms for Hyper-Parameter Optimization](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf).
1 change: 1 addition & 0 deletions docs/en_US/SearchSpaceSpec.md
@@ -85,6 +85,7 @@ All types of sampling strategies and their parameters are listed here:
| Grid Search Tuner | &#10003; | | | &#10003; | | &#10003; | | | | |
| Hyperband Advisor | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; |
| Metis Tuner | &#10003; | &#10003; | &#10003; | &#10003; | | | | | | |
| GP Tuner | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | &#10003; | | | | |


Known Limitations:
2 changes: 1 addition & 1 deletion examples/trials/auto-gbdt/config.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
2 changes: 1 addition & 1 deletion examples/trials/auto-gbdt/config_metis.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space_metis.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: MetisTuner
  classArgs:
2 changes: 1 addition & 1 deletion examples/trials/auto-gbdt/config_pai.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
2 changes: 1 addition & 1 deletion examples/trials/mnist/config.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
2 changes: 1 addition & 1 deletion examples/trials/mnist/config_assessor.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
2 changes: 1 addition & 1 deletion examples/trials/mnist/config_frameworkcontroller.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
2 changes: 1 addition & 1 deletion examples/trials/mnist/config_kubeflow.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
2 changes: 1 addition & 1 deletion examples/trials/mnist/config_pai.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
2 changes: 1 addition & 1 deletion examples/trials/mnist/config_windows.yml
@@ -9,7 +9,7 @@ searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  #SMAC (SMAC should be installed through nnictl)
  builtinTunerName: TPE
  classArgs:
2 changes: 1 addition & 1 deletion src/nni_manager/rest_server/restValidationSchemas.ts
@@ -162,7 +162,7 @@ export namespace ValidationSchemas {
checkpointDir: joi.string().allow('')
}),
tuner: joi.object({
builtinTunerName: joi.string().valid('TPE', 'Random', 'Anneal', 'Evolution', 'SMAC', 'BatchTuner', 'GridSearch', 'NetworkMorphism', 'MetisTuner'),
builtinTunerName: joi.string().valid('TPE', 'Random', 'Anneal', 'Evolution', 'SMAC', 'BatchTuner', 'GridSearch', 'NetworkMorphism', 'MetisTuner', 'GPTuner'),
codeDir: joi.string(),
classFileName: joi.string(),
className: joi.string(),
4 changes: 3 additions & 1 deletion src/sdk/pynni/nni/constants.py
@@ -29,7 +29,8 @@
    'GridSearch': 'nni.gridsearch_tuner.gridsearch_tuner',
    'NetworkMorphism': 'nni.networkmorphism_tuner.networkmorphism_tuner',
    'Curvefitting': 'nni.curvefitting_assessor.curvefitting_assessor',
    'MetisTuner': 'nni.metis_tuner.metis_tuner'
    'MetisTuner': 'nni.metis_tuner.metis_tuner',
    'GPTuner': 'nni.gp_tuner.gp_tuner'
}

ClassName = {
@@ -42,6 +43,7 @@
    'GridSearch': 'GridSearchTuner',
    'NetworkMorphism':'NetworkMorphismTuner',
    'MetisTuner':'MetisTuner',
    'GPTuner':'GPTuner',

    'Medianstop': 'MedianstopAssessor',
    'Curvefitting': 'CurvefittingAssessor'
Empty file.
170 changes: 170 additions & 0 deletions src/sdk/pynni/nni/gp_tuner/gp_tuner.py
@@ -0,0 +1,170 @@
# Copyright (c) Microsoft Corporation
# All rights reserved.
#
# MIT License
#
# Permission is hereby granted, free of charge,
# to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and
# to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING
# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
gp_tuner.py
'''

import warnings
import logging
import numpy as np

from sklearn.gaussian_process.kernels import Matern
from sklearn.gaussian_process import GaussianProcessRegressor

from nni.tuner import Tuner
from nni.utils import OptimizeMode, extract_scalar_reward

from .target_space import TargetSpace
from .util import UtilityFunction, acq_max

logger = logging.getLogger("GP_Tuner_AutoML")


class GPTuner(Tuner):
    '''
    GPTuner
    '''

    def __init__(self, optimize_mode="maximize", utility='ei', kappa=5, xi=0, nu=2.5, alpha=1e-6, cold_start_num=10,
                 selection_num_warm_up=100000, selection_num_starting_points=250):
        self.optimize_mode = OptimizeMode(optimize_mode)

        # utility function related
        self.utility = utility
        self.kappa = kappa
        self.xi = xi

        # target space
        self._space = None

        self._random_state = np.random.RandomState()

        # nu, alpha are GPR related params
        self._gp = GaussianProcessRegressor(
            kernel=Matern(nu=nu),
            alpha=alpha,
            normalize_y=True,
            n_restarts_optimizer=25,
            random_state=self._random_state
        )
        # num of random evaluations before GPR
        self._cold_start_num = cold_start_num

        # params for acq_max
        self._selection_num_warm_up = selection_num_warm_up
        self._selection_num_starting_points = selection_num_starting_points

        # num of imported data
        self.supplement_data_num = 0

    def update_search_space(self, search_space):
        """Update the self.bounds and self.types by the search_space.json

        Parameters
        ----------
        search_space : dict
        """
        self._space = TargetSpace(search_space, self._random_state)

    def generate_parameters(self, parameter_id):
        """Generate the next parameters for a trial.

        If the number of received trial results is lower than the cold start
        number, parameters are sampled randomly; otherwise they are chosen by
        maximizing the acquisition function under the Gaussian Process model.

        Parameters
        ----------
        parameter_id : int

        Returns
        -------
        result : dict
        """
        if self._space.len() < self._cold_start_num:
            results = self._space.random_sample()
        else:
            # Sklearn's GP throws a large number of warnings at times, but
            # we don't really need to see them here.
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                self._gp.fit(self._space.params, self._space.target)

            util = UtilityFunction(
                kind=self.utility, kappa=self.kappa, xi=self.xi)

            results = acq_max(
                f_acq=util.utility,
                gp=self._gp,
                y_max=self._space.target.max(),
                bounds=self._space.bounds,
                space=self._space,
                num_warmup=self._selection_num_warm_up,
                num_starting_points=self._selection_num_starting_points
            )

        results = self._space.array_to_params(results)
        logger.info("Generate parameters:\n %s", results)
        return results

    def receive_trial_result(self, parameter_id, parameters, value):
        """Tuner receives a result from a trial.

        Parameters
        ----------
        parameter_id : int
        parameters : dict
        value : dict/float
            if value is dict, it should have "default" key.
        """
        value = extract_scalar_reward(value)
        if self.optimize_mode == OptimizeMode.Minimize:
            value = -value

        logger.info("Received trial result.")
        logger.info("value: %s", value)
        logger.info("parameter: %s", parameters)
        self._space.register(parameters, value)

    def import_data(self, data):
        """Import additional data for tuning.

        Parameters
        ----------
        data:
            a list of dictionaries, each of which has at least two keys, 'parameter' and 'value'
        """
        _completed_num = 0
        for trial_info in data:
            logger.info("Importing data, current processing progress %s / %s",
                        _completed_num, len(data))
            _completed_num += 1
            assert "parameter" in trial_info
            _params = trial_info["parameter"]
            assert "value" in trial_info
            _value = trial_info['value']
            if not _value:
                logger.info(
                    "Useless trial data, value is %s, skip this trial data.", _value)
                continue
            self.supplement_data_num += 1
            _parameter_id = '_'.join(
                ["ImportData", str(self.supplement_data_num)])
            self.receive_trial_result(
                parameter_id=_parameter_id, parameters=_params, value=_value)
        logger.info("Successfully imported data to the GP tuner.")