
fix documentation warnings & 404 deadlink #1276

Merged
merged 3 commits into from
Jul 9, 2019
2 changes: 1 addition & 1 deletion docs/en_US/BohbAdvisor.md
Expand Up @@ -51,7 +51,7 @@ nnictl package install --name=BOHB

To use BOHB, you should add the following spec in your experiment's YAML config file:

```yml
```yaml
advisor:
builtinAdvisorName: BOHB
classArgs:
Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/BuiltinTuner.md
Expand Up @@ -356,7 +356,7 @@ Similar to Hyperband, it is suggested when you have limited computation resource

**Usage example**

```yml
```yaml
advisor:
builtinAdvisorName: BOHB
classArgs:
Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/CommunitySharings/HpoComparision.md
Expand Up @@ -34,7 +34,7 @@ is running in docker?: no

### Problem Description

Nonconvex problem on the hyper-parameter search of [AutoGBDT](../gbdt_example.md) example.
Nonconvex problem on the hyper-parameter search of [AutoGBDT](../GbdtExample.md) example.

### Search Space

Expand Down
6 changes: 3 additions & 3 deletions docs/en_US/GeneralNasInterfaces.md
Expand Up @@ -50,7 +50,7 @@ To illustrate the convenience of the programming interface, we use the interface

After finishing the trial code through the annotation above, users have implicitly specified the search space of neural architectures in the code. Based on the code, NNI will automatically generate a search space file that can be fed into tuning algorithms. This search space file uses the following JSON format.

```json
```javascript
Contributor: javascript?

Contributor Author: To fix `WARNING: Could not lex literal_block as "json". Highlighting skipped.`

{
"mutable_1": {
"layer_1": {
Expand All @@ -67,7 +67,7 @@ After finishing the trial code through the annotation above, users have implicit

Accordingly, a specified neural architecture (generated by tuning algorithm) is expressed as follows:

```json
```javascript
{
"mutable_1": {
"layer_1": {
Expand Down Expand Up @@ -111,7 +111,7 @@ Example of weight sharing on NNI.

One-Shot NAS is a popular approach to finding good neural architectures within a limited time and resource budget. Basically, it builds a full graph based on the search space, and uses gradient descent to eventually find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).

With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, and then requests another chosen architecture. This is supported by [NNI multi-phase](./multiPhase.md). We support this training approach because training a subgraph is very fast, while building the graph every time a subgraph is trained induces too much overhead.
With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, and then requests another chosen architecture. This is supported by [NNI multi-phase](./MultiPhase.md). We support this training approach because training a subgraph is very fast, while building the graph every time a subgraph is trained induces too much overhead.

![](../img/one-shot_training.png)
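
The training loop just described can be sketched in a trial script as follows. This is only a minimal illustration, not NNI's actual ENAS implementation: `build_full_graph`, `select_subgraph`, and `train_one_minibatch` are hypothetical placeholders for user model code, while `nni.get_next_parameter()` and `nni.report_final_result()` are the standard NNI trial APIs.

```python
import nni

def build_full_graph():
    # Hypothetical: construct the full graph covering the whole search space once.
    ...

def select_subgraph(graph, architecture):
    # Hypothetical: activate only the paths chosen by the given architecture.
    ...

def train_one_minibatch(subgraph):
    # Hypothetical: run one mini-batch of training and return a metric.
    ...

graph = build_full_graph()
for _ in range(1000):                          # number of architectures to sample (illustrative)
    architecture = nni.get_next_parameter()    # receive a chosen architecture
    subgraph = select_subgraph(graph, architecture)
    metric = train_one_minibatch(subgraph)     # train it on the full graph for one mini-batch
    nni.report_final_result(metric)            # report, then request the next architecture
```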

Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/HowToImplementTrainingService.md
Expand Up @@ -7,7 +7,7 @@ TrainingService is a module related to platform management and job schedule in N
## System architecture
![](../img/NNIDesign.jpg)

The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of handling the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module and has a different instance for each training platform. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [Kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkController.md).
The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of handling the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module and has a different instance for each training platform. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [Kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkControllerMode.md).

In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they only need to complete a child class that implements TrainingService; they don't need to understand the code details of NNIManager, Dispatcher or other modules.
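
As a rough illustration of this subclassing pattern only: the real TrainingService interface is defined in NNI's TypeScript source, so the class and method names below are hypothetical stand-ins, sketched in Python.

```python
from abc import ABC, abstractmethod

class TrainingService(ABC):
    # Hypothetical base class standing in for NNI's real (TypeScript) interface.
    @abstractmethod
    def submit_trial_job(self, trial_config): ...
    @abstractmethod
    def cancel_trial_job(self, trial_job_id): ...
    @abstractmethod
    def list_trial_jobs(self): ...

class MyPlatformTrainingService(TrainingService):
    # A new platform only fills in these platform-specific methods; it never
    # touches NNIManager or Dispatcher internals.
    def submit_trial_job(self, trial_config):
        ...  # launch the trial on the target platform

    def cancel_trial_job(self, trial_job_id):
        ...  # stop the corresponding job

    def list_trial_jobs(self):
        ...  # return the status of all submitted jobs
```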

Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/Nnictl.md
Expand Up @@ -463,7 +463,7 @@ Debug mode will disable version check function in Trialkeeper.

Currently, the following tuners and advisors support importing data:

```yml
```yaml
builtinTunerName: TPE, Anneal, GridSearch, MetisTuner
builtinAdvisorName: BOHB
```
Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/Overview.md
Expand Up @@ -51,7 +51,7 @@ More details about how to run an experiment, please refer to [Get Started](Quick
* [How to adapt your trial code on NNI?](Trials.md)
* [What are tuners supported by NNI?](BuiltinTuner.md)
* [How to customize your own tuner?](CustomizeTuner.md)
* [What are assessors supported by NNI?](BuiltinAssessors.md)
* [What are assessors supported by NNI?](BuiltinAssessor.md)
* [How to customize your own assessor?](CustomizeAssessor.md)
* [How to run an experiment on local?](LocalMode.md)
* [How to run an experiment on multiple machines?](RemoteMachineMode.md)
Expand Down
4 changes: 2 additions & 2 deletions docs/en_US/QuickStart.md
Expand Up @@ -53,7 +53,7 @@ The above code can only try one set of parameters at a time, if we want to tune

NNI was created to help users with tuning jobs; its working process is presented below:

```pseudo
```
input: search space, trial code, config file
output: one optimal hyperparameter configuration

Expand Down Expand Up @@ -240,7 +240,7 @@ Below is the status of all the trials. Specifically:
## Related Topic

* [Try different Tuners](BuiltinTuner.md)
* [Try different Assessors](BuiltinAssessors.md)
* [Try different Assessors](BuiltinAssessor.md)
* [How to use command line tool nnictl](Nnictl.md)
* [How to write a trial](Trials.md)
* [How to run an experiment on local (with multiple GPUs)?](LocalMode.md)
Expand Down
10 changes: 5 additions & 5 deletions docs/en_US/Release.md
Expand Up @@ -12,12 +12,12 @@
* Added multiphase capability for the following builtin tuners:
* TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner.

For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md#write-a-tuner-that-leverages-multi-phase)
For details, please refer to [Write a tuner that leverages multi-phase](./MultiPhase.md)

* Web Portal
* Enable trial comparison in Web Portal. For details, refer to [View trials status](WebUI.md#view-trials-status)
* Allow users to adjust the rendering interval of Web Portal. For details, refer to [View Summary Page](WebUI.md#view-summary-page)
* Show intermediate results in a more friendly way. For details, refer to [View trials status](WebUI.md#view-trials-status)
* Enable trial comparison in Web Portal. For details, refer to [View trials status](WebUI.md)
* Allow users to adjust the rendering interval of Web Portal. For details, refer to [View Summary Page](WebUI.md)
* Show intermediate results in a more friendly way. For details, refer to [View trials status](WebUI.md)
* [Commandline Interface](Nnictl.md)
* `nnictl experiment delete`: delete one or all experiments, including logs, results, environment information and cache. It is used to delete useless experiment results or to free disk space.
* `nnictl platform clean`: used to clean up disk space on a target platform. The provided YAML file includes the information of the target platform, and it follows the same schema as the NNI configuration file.
Expand Down Expand Up @@ -68,7 +68,7 @@

### Major Features

* [Support NNI on Windows](./WindowsLocalMode.md)
* [Support NNI on Windows](./NniOnWindows.md)
* NNI running on Windows in local mode
* [New advisor: BOHB](./BohbAdvisor.md)
* Support a new advisor BOHB, which is a robust and efficient hyperparameter tuning algorithm that combines the advantages of Bayesian optimization and Hyperband
Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/Trials.md
Expand Up @@ -44,7 +44,7 @@ RECEIVED_PARAMS = nni.get_next_parameter()
nni.report_intermediate_result(metrics)
```

`metrics` could be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number (e.g., float or int), or 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessors.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
`metrics` could be any Python object. If users use an NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number (e.g., float or int), or 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to the [assessor](BuiltinAssessor.md). Usually, `metrics` is a periodically evaluated loss or accuracy.
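
Both formats are reported with the same API. A minimal sketch (the loop and values are made up for illustration; `nni.report_intermediate_result` is the real NNI trial API):

```python
import nni

for epoch in range(10):
    accuracy = 0.1 * epoch  # placeholder metric from your training loop

    # Format 1: a plain number
    nni.report_intermediate_result(accuracy)

    # Format 2: a dict with a key named `default` whose value is a number
    nni.report_intermediate_result({'default': accuracy})
```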

- Report performance of the configuration

Expand Down
4 changes: 2 additions & 2 deletions docs/en_US/community_sharings.rst
Expand Up @@ -8,5 +8,5 @@ In addition to the official tutorials and examples, we encourage community contri
:maxdepth: 2

NNI Practice Sharing<nni_practice_sharing>
Neural Architecture Search Comparison<CommunitySharings/NasComparison>
Hyper-parameter Tuning Algorithm Comparison<CommunitySharings/HpoComparison>
Neural Architecture Search Comparison<./CommunitySharings/NasComparison>
Hyper-parameter Tuning Algorithm Comparison<./CommunitySharings/HpoComparison>
6 changes: 5 additions & 1 deletion docs/requirements.txt
Expand Up @@ -9,4 +9,8 @@ json_tricks
numpy
scipy
coverage
sklearn
sklearn
git+https://github.com/QuanluZhang/ConfigSpace.git
git+https://github.com/QuanluZhang/SMAC3.git
ConfigSpace==0.4.7
statsmodels==0.9.0
1 change: 1 addition & 0 deletions docs/src/configspace
Submodule configspace added at f389e1
5 changes: 5 additions & 0 deletions docs/src/pip-delete-this-directory.txt
@@ -0,0 +1,5 @@
This file is placed here by pip to indicate the source was put
here by pip.

Once this package is successfully installed this source code will be
deleted (unless you remove this file).
47 changes: 24 additions & 23 deletions src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py
Expand Up @@ -259,33 +259,34 @@ class BOHB(MsgDispatcherBase):
optimize_mode: str
optimize mode, 'maximize' or 'minimize'
min_budget: float
The smallest budget to consider. Needs to be positive!
max_budget: float
The largest budget to consider. Needs to be larger than min_budget!
The budgets will be geometrically distributed
:math:`a^2 + b^2 = c^2 \sim \eta^k` for :math:`k\in [0, 1, ... , num\_subsets - 1]`.
The smallest budget to consider. Needs to be positive!
max_budget: float
The largest budget to consider. Needs to be larger than min_budget!
The budgets will be geometrically distributed
:math:`a^2 + b^2 = c^2 \\sim \\eta^k` for :math:`k\\in [0, 1, ... , num\\_subsets - 1]`.
eta: int
In each iteration, a complete run of sequential halving is executed. In it,
after evaluating each configuration on the same subset size, only a fraction of
1/eta of them 'advances' to the next round.
Must be greater or equal to 2.
In each iteration, a complete run of sequential halving is executed. In it,
after evaluating each configuration on the same subset size, only a fraction of
1/eta of them 'advances' to the next round.
Must be greater or equal to 2.
min_points_in_model: int
number of observations to start building a KDE. Default 'None' means
dim+1, the bare minimum.
number of observations to start building a KDE. Default 'None' means
dim+1, the bare minimum.
top_n_percent: int
percentage ( between 1 and 99, default 15) of the observations that are considered good.
num_samples: int
number of samples to optimize EI (default 64)
random_fraction: float
fraction of purely random configurations that are sampled from the
prior without the model.
bandwidth_factor: float
to encourage diversity, the points proposed to optimize EI, are sampled
from a 'widened' KDE where the bandwidth is multiplied by this factor (default: 3)
min_bandwidth: float
to keep diversity, even when all (good) samples have the same value for one of the parameters,
a minimum bandwidth (Default: 1e-3) is used instead of zero.
percentage ( between 1 and 99, default 15) of the observations that are considered good.
num_samples: int
number of samples to optimize EI (default 64)
random_fraction: float
fraction of purely random configurations that are sampled from the
prior without the model.
bandwidth_factor: float
to encourage diversity, the points proposed to optimize EI, are sampled
from a 'widened' KDE where the bandwidth is multiplied by this factor (default: 3)
min_bandwidth: float
to keep diversity, even when all (good) samples have the same value for one of the parameters,
a minimum bandwidth (Default: 1e-3) is used instead of zero.
"""

def __init__(self,
optimize_mode='maximize',
min_budget=1,
Expand Down
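
The docstring above says the budgets are distributed geometrically between `min_budget` and `max_budget` with factor `eta`. As a purely illustrative computation, not NNI's internal code, and assuming `max_budget / min_budget` is an exact power of `eta`:

```python
import math

def geometric_budgets(min_budget=1.0, max_budget=27.0, eta=3):
    # Number of multiplicative steps needed to go from min_budget to max_budget.
    # round() guards against floating-point error in the logarithm.
    num_steps = int(round(math.log(max_budget / min_budget, eta)))
    return [min_budget * eta ** i for i in range(num_steps + 1)]

print(geometric_budgets())  # [1.0, 3.0, 9.0, 27.0]
```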