
Merge pull request #6 from Microsoft/master
merge master
SparkSnail authored Sep 19, 2018
2 parents 6d09780 + cdee9c3 commit 6d669c6
Showing 18 changed files with 277 additions and 303 deletions.
7 changes: 2 additions & 5 deletions Makefile
@@ -109,16 +109,13 @@ remote-machine-install:
cd src/sdk/pynni && python3 setup.py install $(PIP_MODE)


# All-in-one target
# All-in-one target for non-expert users
# Installs NNI as well as its dependencies, and update bashrc to set PATH
.PHONY: easy-install
easy-install: check-perm
easy-install: install-dependencies
easy-install: build
easy-install: install-python-modules
easy-install: install-node-modules
easy-install: install-scripts
easy-install: install-examples
easy-install: install
easy-install: update-bashrc
easy-install:
#$(_INFO) Complete! #(_END)
91 changes: 91 additions & 0 deletions README.Makefile.md
@@ -0,0 +1,91 @@
# Makefile and Installation Setup

NNI uses GNU make for building and installing.

The `Makefile` offers the standard targets `build`, `install`, and `uninstall`, as well as alternative installation targets for different setups:

* `easy-install`: target for non-expert users, which handles everything automatically;
* `pip-install`: target intended for use with `setup.py`;
* `dev-install`: target for NNI contributors, which installs NNI as symlinks instead of copying files;
* `remote-machine-install`: target that installs only the core Python library, for remote machine workers.

These targets are detailed below.

## Dependencies

NNI requires at least Node.js, Yarn, and setuptools to build, while PIP and TypeScript are also recommended.

NNI requires Node.js, serve, and all dependency libraries to run.
Required Node.js libraries (including TypeScript) can be installed by Yarn, and required Python libraries can be installed by setuptools or PIP.

For NNI *users*, `make install-dependencies` can be used to install Node.js, Yarn, and serve.
This will install Node.js and serve to NNI's installation directory, and install Yarn to `/tmp/nni-yarn`.
This target requires wget to work.

For NNI *developers*, it is recommended to install Node.js, Yarn, and serve manually.
See their official sites for installation guides.

## Building NNI

Simply run `make` when dependencies are ready.

## Installation

### Directory Hierarchy

The main parts of the NNI project consist of two Node.js modules (`nni_manager`, `webui`) and two Python packages (`nni`, `nnictl`).

By default the Node.js modules are installed to `/usr/share/nni` for all users, or to `~/.local/nni` for the current user.

The Python packages are installed with setuptools, so their location depends on the Python configuration.
When installing as a non-privileged user and no virtualenv is detected, the `--user` flag will be used.

In addition, `nnictl` offers a bash completion script, which will be installed to `/usr/share/bash-completion/completions` or `~/.bash_completion.d`.

In some configurations, NNI will also install Node.js and the serve module to `/usr/share/nni`.

All directories mentioned above are configurable. See the next section for details.

### Configuration

The `Makefile` uses environment variables to override default settings.

Available variables are listed below:

| Name | Description | Default for normal user | Default for root |
|--------------------|---------------------------------------------------------|-----------------------------------|-------------------------------------------------|
| `BIN_PATH` | Path for executables | `~/.local/bin` | `/usr/bin` |
| `INSTALL_PREFIX` | Path for Node.js modules (a suffix `nni` will be added) | `~/.local` | `/usr/share` |
| `EXAMPLES_PATH` | Path for NNI examples | `~/nni/examples` | `$INSTALL_PREFIX/nni/examples` |
| `BASH_COMP_SCRIPT` | Path of bash completion script | `~/.bash_completion.d/nnictl` | `/usr/share/bash-completion/completions/nnictl` |
| `PIP_MODE` | Arguments for `python3 setup.py install` | `--user` if `VIRTUAL_ENV` not set | (empty) |
| `NODE_PATH` | Path to install Node.js runtime | `$INSTALL_PREFIX/nni/node` | `$INSTALL_PREFIX/nni/node` |
| `SERVE_PATH` | Path to install serve package | `$INSTALL_PREFIX/nni/serve` | `$INSTALL_PREFIX/nni/serve` |
| `YARN_PATH` | Path to install Yarn | `/tmp/nni-yarn` | `/tmp/nni-yarn` |
| `NODE` | Node.js command | see source file | see source file |
| `SERVE` | serve command | see source file | see source file |
| `YARN` | Yarn command | see source file | see source file |

Note that these variables influence the installation destination as well as the generated `nnictl` and `nnimanager` scripts; for example, `make INSTALL_PREFIX=/opt install` would place the Node.js modules under `/opt/nni`.
If the path to which files are copied differs from where they will run (e.g. when creating a distro package), please generate `nnictl` and `nnimanager` manually.

### Targets

The workflow of each installation target is listed below:

| Target | Workflow |
|--------------------------|----------------------------------------------------------------------|
| `install` | Install Python packages, Node.js modules, NNI scripts, and examples |
| `easy-install` | Install dependencies, build, install NNI, and edit `~/.bashrc` |
| `pip-install` | Install dependencies, build, install NNI excluding Python packages |
| `dev-install` | Install Python and Node.js modules as symlinks, then install scripts |
| `remote-machine-install` | Install `nni` Python package |

## TODO

* `clean` target
* `test` target
* `lint` target
* Exclude tuners and their dependencies from `remote-machine-install`
* Test cases for each target
* Review variables
31 changes: 20 additions & 11 deletions README.md
@@ -26,31 +26,40 @@ The tool dispatches and runs trial jobs that generated by tuning algorithms to s
* As a researcher and data scientist, you want to implement your own AutoML algorithms and compare with other algorithms
* As a ML platform owner, you want to support AutoML in your platform

# Getting Started with NNI
# Get Started with NNI

## **Installation**
Install through python pip. (the current version only supports linux, nni on ubuntu 16.04 or newer has been well tested)
* requirements: python >= 3.5, git, wget
pip Installation Prerequisites
* Linux (Ubuntu 16.04 or newer has been well tested)
* python >= 3.5
* git, wget

```
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
```

## **Quick start: run your first experiment locally**
It only requires 3 steps to start an experiment on NNI:
![](./docs/3_steps.jpg)


NNI provides a set of examples in the package to get you familiar with the above process. In the following example [/examples/trials/mnist], we have already set up the configuration and updated the training code for you. You can directly run the following command to start an experiment.

## **Quick start: run an experiment at local**
Requirements:
* NNI installed on your local machine
* tensorflow installed
**NOTE**: The following example is an experiment built on TensorFlow; make sure you have **TensorFlow installed** before running the following command.

Run the following command to create an experiment for [mnist]
Try it out:
```bash
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
This command will start an experiment and a WebUI. The WebUI endpoint will be shown in the output of this command (for example, `http://localhost:8080`). Open this URL in your browser. You can analyze your experiment through the WebUI, or browse the trials' TensorBoard.

In the command output, find out the **Web UI url** and open it in your browser. You can analyze your experiment through WebUI, or browse trials' tensorboard.

To learn more about how this example was constructed and how to analyze the experiment results in the NNI WebUI, please refer to [How to write a trial run on NNI (MNIST as an example)?](docs/WriteYourTrial.md)

## **Please refer to [Get Started Tutorial](docs/GetStarted.md) for more detailed information.**
## More tutorials
* [How to write a trial running on NNI (MNIST as an example)?](docs/WriteYourTrial.md)

* [Tutorial of NNI python annotation.](tools/nni_annotation/README.md)
* [Tuners supported by NNI.](src/sdk/pynni/nni/README.md)
* [How to enable early stop (i.e. assessor) in an experiment?](docs/EnableAssessor.md)
Binary file added docs/3_steps.jpg
6 changes: 4 additions & 2 deletions docs/GetStarted.md
@@ -1,14 +1,16 @@
**Getting Started with NNI**
**Get Started with NNI**
===

## **Installation**
* __Dependencies__

python >= 3.5
git
wget

python pip should also be correctly installed. You can use `which pip` or `pip -V` to check it on Linux.

* Note: For now, we don't support virtual environment.
* Note: we don't support virtual environments in current releases.

* __Install NNI through pip__

11 changes: 6 additions & 5 deletions docs/RemoteMachineMode.md
@@ -1,9 +1,10 @@
**Run an Experiment on Multiple Machines**
===
NNI supports running an experiment on multiple machines, called remote machine mode. Let's say you have multiple machines with the account `bob` (Note: the account is not necessarily the same on multiple machines):
| IP | Username | Password |
| --------|---------|-------|
| 10.1.1.1 | bob | bob123 |
NNI supports running an experiment on multiple machines, called remote machine mode. Let's say you have multiple machines with the account `bob` (Note: the account is not necessarily the same on multiple machines):

| IP | Username| Password |
| -------- |---------|-------|
| 10.1.1.1 | bob | bob123 |
| 10.1.1.2 | bob | bob123 |
| 10.1.1.3 | bob | bob123 |

@@ -61,4 +62,4 @@ Simply filling the `machineList` section. This yaml file is named `exp_remote.ya
```
nnictl create --config exp_remote.yaml
```
to start the experiment. This command can be executed on one of those three machines above, and can also be executed on another machine which has NNI installed and has network accessibility to those three machines.
98 changes: 72 additions & 26 deletions docs/WriteYourTrial.md
@@ -1,9 +1,14 @@
**Write a Trial which can Run on NNI**
**Write a Trial Run on NNI**
===
There would be only a few changes on your existing trial(model) code to make the code runnable on NNI. We provide two approaches for you to modify your code: `Python annotation` and `NNI APIs for trial`

## NNI APIs
We also support NNI APIs for trial code. By using this approach, you should first prepare a search space file. An example is shown below:
A **Trial** in NNI is an individual attempt at applying a set of parameters on a model.

To define an NNI trial, you first need to define the set of parameters and then update the model. NNI provides two approaches for you to define a trial: `NNI API` and `NNI Python annotation`.

## NNI API
>Step 1 - Prepare a SearchSpace parameters file.
An example is shown below:
```
{
"dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
"conv_size":{"_type":"choice","_value":[2,3,5,7]},
"hidden_size":{"_type":"choice","_value":[124,512,1024]},
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
You can refer to [here](SearchSpaceSpec.md) for the tutorial of search space.
Refer to [SearchSpaceSpec.md](SearchSpaceSpec.md) to learn more about search spaces.

Then, include `import nni` in your trial code to use NNI APIs. Using the line:
```
RECEIVED_PARAMS = nni.get_parameters()
```
to get hyper-parameters' values assigned by tuner. `RECEIVED_PARAMS` is an object, for example:
```
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
```
>Step 2 - Update model code
~~~~
2.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.
2.2 Get predefined parameters
Use the following code snippet:
RECEIVED_PARAMS = nni.get_parameters()
to get hyper-parameters' values assigned by tuner. `RECEIVED_PARAMS` is an object, for example:
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
2.3 Report NNI results
Use the API: `nni.report_intermediate_result(accuracy)` to send `accuracy` to the assessor.
Use the API: `nni.report_final_result(accuracy)` to send `accuracy` to the tuner.
~~~~

**NOTE**:
~~~~
accuracy - The `accuracy` could be any python object, but if you use a built-in NNI tuner/assessor, `accuracy` should be a numerical variable (e.g. float, int).
assessor - The assessor decides which trials should stop early based on each trial's performance history (the intermediate results of one trial).
tuner - The tuner generates the next parameters/architecture based on the exploration history (the final results of all trials).
~~~~
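
Putting the steps together, a minimal sketch of a complete trial might look as follows; only the `nni.*` calls are the actual API from the steps above, while the training logic is a hypothetical stand-in used to keep the sketch self-contained:

```python
import random
import nni

# Hypothetical stand-in for real model training and evaluation,
# used only to make this sketch runnable.
def train_and_evaluate(params, epoch):
    return min(1.0, 0.5 + 0.05 * epoch + 0.01 * random.random())

if __name__ == '__main__':
    # Step 2.2: hyper-parameters assigned by the tuner, e.g.
    # {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
    RECEIVED_PARAMS = nni.get_parameters()
    accuracy = 0.0
    for epoch in range(10):
        accuracy = train_and_evaluate(RECEIVED_PARAMS, epoch)
        # Step 2.3: intermediate results go to the assessor
        nni.report_intermediate_result(accuracy)
    # The final result goes to the tuner; keep it numerical (float/int)
    # when using the built-in tuners/assessors.
    nni.report_final_result(accuracy)
```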

>Step 3 - Enable NNI API
To enable NNI API mode, you need to set *useAnnotation* to false and provide the path of the SearchSpace file that you defined in step 1:

In the yaml configure file, you need two lines to enable NNI APIs:
```
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.
You can refer to [here](ExperimentConfig.md) for more information about how to set up experiment configurations.


## NNI Python Annotation
An alternative way to write a trial is to use NNI's annotation syntax for Python. As simple as any annotation, NNI annotations work like comments in your code. You don't have to restructure your existing code or make any other big changes. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune
* specify in which range you want to tune the variables
* annotate which variable you want to report as intermediate result to `assessor`
* annotate which variable you want to report as the final result (e.g. model accuracy) to `tuner`.

Again, taking MNIST as an example, it only requires 2 steps to write a trial with NNI Annotation.

>Step 1 - Update codes with annotations
Please refer to the following TensorFlow code snippet for NNI Annotation; the highlighted 4 lines are annotations that help you to: (1) tune batch\_size and (2) dropout\_rate, (3) report test\_acc every 100 steps, and (4) at last report test\_acc as the final result.

>What is noteworthy: as these newly added lines are annotations, they do not actually change your previous code logic, and you can still run your code as usual in environments without NNI installed.
## NNI Annotation
We designed a new syntax for users to annotate the variables they want to tune and in what range they want to tune the variables. Also, they can annotate which variable they want to report as intermediate result to `assessor`, and which variable to report as the final result (e.g. model accuracy) to `tuner`. A really appealing feature of our NNI annotation is that it exists as comments in your code, which means you can run your code as before without NNI. Let's look at an example, below is a piece of tensorflow code:
```diff
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
@@ -64,14 +108,16 @@
+ """@nni.report_final_result(test_acc)"""
```

Let's say you want to tune batch\_size and dropout\_rate, and report test\_acc every 100 steps, at last report test\_acc as final result. With our NNI annotation, your code would look like below:
>NOTE
>>`@nni.variable` will take effect on its following line
>>
>>`@nni.report_intermediate_result`/`@nni.report_final_result` will send the data to assessor/tuner at that line.
>>
>>Please refer to [Annotation README](../tools/annotation/README.md) for more information about annotation syntax and its usage.
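
For illustration, here is a minimal, self-contained sketch in the same spirit; the `nni.choice`/`nni.uniform` sampling expressions follow the search space types shown earlier, and the accuracy computation is a hypothetical placeholder. Since the annotations are plain string literals, such a file also runs as ordinary Python without NNI:

```python
"""@nni.variable(nni.choice(50, 250, 500), name=batch_size)"""
batch_size = 128
"""@nni.variable(nni.uniform(0.1, 0.5), name=dropout_rate)"""
dropout_rate = 0.5

test_acc = 0.0
for step in range(1000):
    # hypothetical stand-in for one training step returning test accuracy
    test_acc = 1.0 - 1.0 / (1.0 + step)
    if step % 100 == 0:
        """@nni.report_intermediate_result(test_acc)"""

"""@nni.report_final_result(test_acc)"""
```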

Simply adding four lines would make your code runnable on NNI. You can still run your code independently. `@nni.variable` works on its next line assignment, and `@nni.report_intermediate_result`/`@nni.report_final_result` would send the data to assessor/tuner at that line. Please refer to [here](../tools/annotation/README.md) for more annotation syntax and more powerful usage. In the yaml configure file, you need one line to enable NNI annotation:
>Step 2 - Enable NNI Annotation
In the yaml configuration file, you need to set *useAnnotation* to true to enable NNI annotation:
```
useAnnotation: true
```

For users to correctly leverage NNI annotation, we briefly introduce how it works here: NNI precompiles users' trial code to find all the annotations, each of which is a single line beginning with `"""@nni`. NNI then replaces each annotation with a corresponding NNI API call at the location where the annotation appears.

**Note: in your trial code, you can use either NNI APIs or NNI annotation, but not both simultaneously.**
