Commit 892c9c4

fix some document formatting and typo. (#912)

squirrelsc authored and yangmao99 committed Mar 26, 2019
1 parent 3640115 commit 892c9c4

Showing 7 changed files with 20 additions and 17 deletions.

11 changes: 7 additions & 4 deletions docs/en_US/ExperimentConfig.md
@@ -4,9 +4,10 @@ A config file is needed when creating an experiment; the path of the config file is provided to nnictl.
The config file is written in YAML format and needs to be written correctly.
This document describes the rules for writing a config file, and provides some examples and templates.

-* [Template](#Template) (the templates of an config file)
-* [Configuration spec](#Configuration) (the configuration specification of every attribute in config file)
-* [Examples](#Examples) (the examples of config file)
+- [Experiment config reference](#experiment-config-reference)
+- [Template](#template)
+- [Configuration spec](#configuration-spec)
+- [Examples](#examples)

<a name="Template"></a>
## Template
@@ -205,6 +206,7 @@ machineList:

* __logCollection__
* Description

__logCollection__ sets the way to collect logs on the remote, pai, kubeflow, and frameworkcontroller platforms. There are two ways to collect logs: with `http`, the trial keeper posts log content back via HTTP requests, which may slow down log processing in the trial keeper; with `none`, the trial keeper posts back only job metrics, not log content. If your log content is large, consider setting this param to `none`.

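For illustration, a minimal sketch of where this field sits in an experiment config file (the surrounding field values are placeholders, not taken from this page):

```yaml
authorName: test
experimentName: example
trainingServicePlatform: remote   # or pai, kubeflow, frameworkcontroller
logCollection: none               # post back only job metrics, not log content
```
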
* __tuner__
@@ -215,6 +217,7 @@
* __builtinTunerName__

__builtinTunerName__ specifies the name of a built-in tuner; the NNI SDK provides several tuners, including {__TPE__, __Random__, __Anneal__, __Evolution__, __BatchTuner__, __GridSearch__}.

* __classArgs__

__classArgs__ specifies the arguments of the tuner algorithm. If __builtinTunerName__ is in {__TPE__, __Random__, __Anneal__, __Evolution__}, the user should set __optimize_mode__, as in the sketch below.
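
For example, a tuner spec combining these fields could look like the following sketch (values are illustrative, not defaults):

```yaml
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize   # required when builtinTunerName is TPE, Random, Anneal, or Evolution
```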
@@ -573,7 +576,7 @@ machineList:

* __remote mode__

-If run trial jobs in remote machine, users could specify the remote mahcine information as fllowing format:
+If running trial jobs on a remote machine, users can specify the remote machine information in the following format:

```yaml
authorName: test
```
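
The YAML block above is truncated in the diff view; a hedged sketch of the machineList portion that a remote-mode config typically ends with (IP, port, and credentials are placeholders):

```yaml
machineList:
  - ip: 10.10.10.10
    port: 22
    username: test
    passwd: test
```
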
6 changes: 3 additions & 3 deletions docs/en_US/KubeflowMode.md
@@ -69,7 +69,7 @@ kubeflowConfig:
## Run an experiment
-Use `examples/trials/mnist` as an example. This is a tensorflow job, and use tf-operator of kubeflow. The NNI config yml file's content is like:
+Use `examples/trials/mnist` as an example. This is a TensorFlow job that uses the tf-operator of Kubeflow. The NNI config YAML file's content is like:
```
authorName: default
experimentName: example_mnist
```

@@ -119,9 +119,9 @@ kubeflowConfig:
path: {your_nfs_server_export_path}
```
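
For context, the `path` field shown above sits at the end of the kubeflowConfig section; a sketch of the whole section for a tf-operator job (the operator and apiVersion values are assumptions):

```
kubeflowConfig:
  operator: tf-operator
  apiVersion: v1alpha2
  storage: nfs
  nfs:
    server: {your_nfs_server}
    path: {your_nfs_server_export_path}
```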
-Note: You should explicitly set `trainingServicePlatform: kubeflow` in NNI config yml file if you want to start experiment in kubeflow mode.
+Note: You should explicitly set `trainingServicePlatform: kubeflow` in the NNI config YAML file if you want to start the experiment in kubeflow mode.
-If you want to run Pytorch jobs, you could set your config files as follow:
+If you want to run PyTorch jobs, you could set your config file as follows:
```
authorName: default
experimentName: example_mnist_distributed_pytorch
```
4 changes: 2 additions & 2 deletions docs/en_US/PAIMode.md
@@ -56,9 +56,9 @@ Compared with LocalMode and [RemoteMachineMode](RemoteMachineMode.md), trial configuration in pai mode has these additional keys:
* outputDir
* Optional key. It specifies the HDFS output directory for the trial. Once the trial is completed (whether it succeeds or fails), the trial's stdout and stderr will be copied to this directory by the NNI SDK automatically. The format should be something like hdfs://{your HDFS host}:9000/{your output directory}
* virtualCluster
-* Optional key. Set the virtualCluster of PAI. If omitted, the job will run on default virtual cluster.
+* Optional key. Set the virtualCluster of OpenPAI. If omitted, the job will run on the default virtual cluster.
* shmMB
-* Optional key. Set the shmMB configuration of PAI, it set the shared memory for one task in the task role.
+* Optional key. Set the shmMB configuration of OpenPAI; it sets the shared memory for one task in the task role.

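Putting these keys together, a trial section for pai mode might look like this sketch (the command, image, paths, and sizes are placeholders):

```
trial:
  command: python3 mnist.py
  codeDir: ~/nni/examples/trials/mnist
  gpuNum: 1
  cpuNum: 1
  memoryMB: 8196
  image: msranni/nni:latest
  outputDir: hdfs://10.10.10.10:9000/nni_output
  virtualCluster: default
  shmMB: 1024
```
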
Once the NNI experiment config file is filled in and saved (for example, as exp_pai.yml), run the following command:
```
nnictl create --config exp_pai.yml
```
6 changes: 3 additions & 3 deletions docs/en_US/RELEASE.md
@@ -14,8 +14,8 @@
* Fix search space parsing error when using SMAC tuner.
* Fix cifar10 example broken pipe issue.
* Add unit test cases for nnimanager and local training service.
-* Add integration test azure pipelines for remote machine, PAI and kubeflow training services.
-* Support Pylon in PAI webhdfs client.
+* Add integration test azure pipelines for remote machine, OpenPAI and kubeflow training services.
+* Support Pylon in OpenPAI webhdfs client.


## Release 0.5.1 - 1/31/2019
@@ -28,7 +28,7 @@

### Bug Fixes and Other Changes
* Fix the bug of installation in python virtualenv, and refactor the installation logic
-* Fix the bug of HDFS access failure on PAI mode after PAI is upgraded.
+* Fix the bug of HDFS access failure on OpenPAI mode after OpenPAI is upgraded.
* Fix the bug that sometimes in-place flushed stdout makes experiment crash


6 changes: 3 additions & 3 deletions docs/en_US/cifar10_examples.md
@@ -16,9 +16,9 @@ In this example, we have selected the following common deep learning optimizer:

#### Preparations

-This example requires pytorch. Pytorch install package should be chosen based on python version and cuda version.
+This example requires PyTorch. The PyTorch install package should be chosen based on the Python and CUDA versions.

-Here is an example of the environment python==3.5 and cuda == 8.0, then using the following commands to install [pytorch][2]:
+Here is an example for the environment python==3.5 and cuda==8.0, using the following commands to install [PyTorch][2]:

```bash
python3 -m pip install http://download.pytorch.org/whl/cu80/torch-0.4.1-cp35-cp35m-linux_x86_64.whl
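# Note: the diff view truncates this block here. The cifar10 example imports
# torchvision, so the original presumably also installs it (an assumption):
python3 -m pip install torchvision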
```

@@ -81,4 +81,4 @@ nnictl create --config nni/examples/trials/cifar10_pytorch/config.yml
[6]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/config.yml
[7]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/config_pai.yml
[8]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/search_space.json
-[9]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/main.py
\ No newline at end of file
+[9]: https://github.com/Microsoft/nni/blob/master/examples/trials/cifar10_pytorch/main.py
2 changes: 1 addition & 1 deletion docs/en_US/hyperbandAdvisor.md
@@ -10,7 +10,7 @@ First, this is an example of how to write an automl algorithm based on MsgDispatcherBase.
Second, this implementation fully leverages Hyperband's internal parallelism. More specifically, the next bucket is not started strictly after the current bucket; instead, it starts when resources become available.

## 3. Usage
-To use Hyperband, you should add the following spec in your experiment's yml config file:
+To use Hyperband, you should add the following spec to your experiment's YAML config file:

```
advisor:
```
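
The spec above is cut off by the diff view; a hedged sketch of a complete advisor spec, with the classArgs values as examples rather than defaults:

```
advisor:
  builtinAdvisorName: Hyperband
  classArgs:
    optimize_mode: maximize
    R: 60
    eta: 3
```
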
2 changes: 1 addition & 1 deletion examples/trials/network_morphism/README.md
@@ -5,7 +5,7 @@ The Network Morphism is a built-in Tuner using network morphism techniques to search

### 1. Training framework support

-The network morphism now is framework-based, and we have not implemented the framework-free methods. The training frameworks which we have supported yet are Pytorch and Keras. If you get familiar with the intermediate JSON format, you can build your own model in your own training framework. In the future, we will change to intermediate format from JSON to ONNX in order to get a [standard intermediate representation spec](https://github.com/onnx/onnx/blob/master/docs/IR.md).
+Network morphism is currently framework-based, and we have not implemented framework-free methods. The training frameworks supported so far are PyTorch and Keras. If you are familiar with the intermediate JSON format, you can build your own model in your own training framework. In the future, we will change the intermediate format from JSON to ONNX in order to get a [standard intermediate representation spec](https://github.com/onnx/onnx/blob/master/docs/IR.md).


### 2. Install the requirements
