diff --git a/README.md b/README.md index e3c2670ea5..43b6689c30 100644 --- a/README.md +++ b/README.md @@ -52,32 +52,32 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search - Tuner + Tuner - Assessor + Assessor @@ -229,11 +229,11 @@ You can use these commands to get more information about the experiment ## **How to** * [Install NNI](docs/en_US/Installation.md) -* [Use command line tool nnictl](docs/en_US/NNICTLDOC.md) +* [Use command line tool nnictl](docs/en_US/Nnictl.md) * [Use NNIBoard](docs/en_US/WebUI.md) * [How to define search space](docs/en_US/SearchSpaceSpec.md) * [How to define a trial](docs/en_US/Trials.md) -* [How to choose tuner/search-algorithm](docs/en_US/Builtin_Tuner.md) +* [How to choose tuner/search-algorithm](docs/en_US/BuiltinTuner.md) * [Config an experiment](docs/en_US/ExperimentConfig.md) * [How to use annotation](docs/en_US/Trials.md#nni-python-annotation) @@ -241,12 +241,12 @@ You can use these commands to get more information about the experiment * [Run an experiment on local (with multiple GPUs)?](docs/en_US/LocalMode.md) * [Run an experiment on multiple machines?](docs/en_US/RemoteMachineMode.md) -* [Run an experiment on OpenPAI?](docs/en_US/PAIMode.md) +* [Run an experiment on OpenPAI?](docs/en_US/PaiMode.md) * [Run an experiment on Kubeflow?](docs/en_US/KubeflowMode.md) * [Try different tuners](docs/en_US/tuners.rst) * [Try different assessors](docs/en_US/assessors.rst) -* [Implement a customized tuner](docs/en_US/Customize_Tuner.md) -* [Implement a customized assessor](docs/en_US/Customize_Assessor.md) +* [Implement a customized tuner](docs/en_US/CustomizeTuner.md) +* [Implement a customized assessor](docs/en_US/CustomizeAssessor.md) * [Use Genetic Algorithm to find good model architectures for Reading Comprehension task](examples/trials/ga_squad/README.md) ## **Contribute** @@ -255,9 +255,9 @@ This project welcomes contributions and suggestions, we use [GitHub issues](http Issues with the **good first issue** label are simple and easy-to-start ones that we recommend new contributors to start with. -To set up environment for NNI development, refer to the instruction: [Set up NNI developer environment](docs/en_US/SetupNNIDeveloperEnvironment.md) +To set up environment for NNI development, refer to the instruction: [Set up NNI developer environment](docs/en_US/SetupNniDeveloperEnvironment.md) -Before start coding, review and get familiar with the NNI Code Contribution Guideline: [Contributing](docs/en_US/CONTRIBUTING.md) +Before start coding, review and get familiar with the NNI Code Contribution Guideline: [Contributing](docs/en_US/Contributing.md) We are in construction of the instruction for [How to Debug](docs/en_US/HowToDebug.md), you are also welcome to contribute questions or suggestions on this area. diff --git a/docs/en_US/AdvancedNAS.md b/docs/en_US/AdvancedNas.md similarity index 99% rename from docs/en_US/AdvancedNAS.md rename to docs/en_US/AdvancedNas.md index 72edcc1dd8..6e1a17c7d7 100644 --- a/docs/en_US/AdvancedNAS.md +++ b/docs/en_US/AdvancedNas.md @@ -12,7 +12,7 @@ With the NFS setup (see below), trial code can share model weight through loadin ```yaml tuner: codeDir: path/to/customer_tuner - classFileName: customer_tuner.py + classFileName: customer_tuner.py className: CustomerTuner classArgs: ... 
diff --git a/docs/en_US/AnnotationSpec.md b/docs/en_US/AnnotationSpec.md index 660bf6b3a7..a02fc27603 100644 --- a/docs/en_US/AnnotationSpec.md +++ b/docs/en_US/AnnotationSpec.md @@ -1,14 +1,15 @@ -# NNI Annotation +# NNI Annotation ## Overview -To improve user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotating strings, which does not affect the execution of the original code. +To improve user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotating strings, which does not affect the execution of the original code. Below is an example: ```python '''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)''' learning_rate = 0.1 + ``` The meaning of this example is that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the learning_rate variable. Specifically, this first line is an NNI annotation, which is a single string. Following is an assignment statement. What nni does here is to replace the right value of this assignment statement according to the information provided by the annotation line. diff --git a/docs/en_US/batchTuner.md b/docs/en_US/BatchTuner.md similarity index 100% rename from docs/en_US/batchTuner.md rename to docs/en_US/BatchTuner.md diff --git a/docs/en_US/Blog/HPOComparison.md b/docs/en_US/Blog/HpoComparison.md similarity index 100% rename from docs/en_US/Blog/HPOComparison.md rename to docs/en_US/Blog/HpoComparison.md diff --git a/docs/en_US/Blog/NASComparison.md b/docs/en_US/Blog/NasComparison.md similarity index 100% rename from docs/en_US/Blog/NASComparison.md rename to docs/en_US/Blog/NasComparison.md diff --git a/docs/en_US/Blog/index.rst b/docs/en_US/Blog/index.rst index a38ca82666..8eef1bb1f4 100644 --- a/docs/en_US/Blog/index.rst +++ b/docs/en_US/Blog/index.rst @@ -5,5 +5,5 @@ Research Blog .. toctree:: :maxdepth: 2 - Hyperparameter Optimization Comparison - Neural Architecture Search Comparison \ No newline at end of file + Hyperparameter Optimization Comparison + Neural Architecture Search Comparison diff --git a/docs/en_US/bohbAdvisor.md b/docs/en_US/BohbAdvisor.md similarity index 99% rename from docs/en_US/bohbAdvisor.md rename to docs/en_US/BohbAdvisor.md index 1ec2d7b09c..111525ab92 100644 --- a/docs/en_US/bohbAdvisor.md +++ b/docs/en_US/BohbAdvisor.md @@ -10,7 +10,7 @@ Below we divide introduction of the BOHB process into two parts: ### HB (Hyperband) -We follow Hyperband’s way of choosing the budgets and continue to use SuccessiveHalving, for more details, you can refer to the [Hyperband in NNI](hyperbandAdvisor.md) and [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below. +We follow Hyperband’s way of choosing the budgets and continue to use SuccessiveHalving, for more details, you can refer to the [Hyperband in NNI](HyperbandAdvisor.md) and [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below. 
![](../img/bohb_1.png) diff --git a/docs/en_US/Builtin_Assessors.md b/docs/en_US/BuiltinAssessors.md similarity index 100% rename from docs/en_US/Builtin_Assessors.md rename to docs/en_US/BuiltinAssessors.md diff --git a/docs/en_US/Builtin_Tuner.md b/docs/en_US/BuiltinTuner.md similarity index 100% rename from docs/en_US/Builtin_Tuner.md rename to docs/en_US/BuiltinTuner.md diff --git a/docs/en_US/cifar10_examples.md b/docs/en_US/Cifar10Examples.md similarity index 100% rename from docs/en_US/cifar10_examples.md rename to docs/en_US/Cifar10Examples.md diff --git a/docs/en_US/CONTRIBUTING.md b/docs/en_US/Contributing.md similarity index 97% rename from docs/en_US/CONTRIBUTING.md rename to docs/en_US/Contributing.md index 4a6163352d..b964f3038b 100644 --- a/docs/en_US/CONTRIBUTING.md +++ b/docs/en_US/Contributing.md @@ -28,7 +28,7 @@ When raising issues, please specify the following: Provide PRs with appropriate tags for bug fixes or enhancements to the source code. Do follow the correct naming conventions and code styles when you work on and do try to implement all code reviews along the way. -If you are looking for How to develop and debug the NNI source code, you can refer to [How to set up NNI developer environment doc](./SetupNNIDeveloperEnvironment.md) file in the `docs` folder. +If you are looking for How to develop and debug the NNI source code, you can refer to [How to set up NNI developer environment doc](./SetupNniDeveloperEnvironment.md) file in the `docs` folder. Similarly for [Quick Start](QuickStart.md). For everything else, refer to [NNI Home page](http://nni.readthedocs.io). @@ -39,7 +39,7 @@ A person looking to contribute can take up an issue by claiming it as a comment/ ## Code Styles & Naming Conventions * We follow [PEP8](https://www.python.org/dev/peps/pep-0008/) for Python code and naming conventions, do try to adhere to the same when making a pull request or making a change. One can also take the help of linters such as `flake8` or `pylint` -* We also follow [NumPy Docstring Style](https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy) for Python Docstring Conventions. During the [documentation building](CONTRIBUTING.md#documentation), we use [sphinx.ext.napoleon](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate Python API documentation from Docstring. +* We also follow [NumPy Docstring Style](https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy) for Python Docstring Conventions. During the [documentation building](Contributing.md#documentation), we use [sphinx.ext.napoleon](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate Python API documentation from Docstring. ## Documentation Our documentation is built with [sphinx](http://sphinx-doc.org/), supporting [Markdown](https://guides.github.com/features/mastering-markdown/) and [reStructuredText](http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) format. All our documentations are placed under [docs/en_US](https://github.com/Microsoft/nni/tree/master/docs). 
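As a quick illustration of the NumPy docstring style and sphinx.ext.napoleon workflow that the Contributing guideline above refers to, here is a minimal sketch; the function and its parameters are hypothetical and not part of the NNI codebase:

```python
def report_accuracy(predictions, labels):
    """Compute the fraction of correct predictions.

    Parameters
    ----------
    predictions : list of int
        Predicted class labels.
    labels : list of int
        Ground-truth class labels, with the same length as ``predictions``.

    Returns
    -------
    float
        Number of matching entries divided by the total number of samples.
    """
    matches = sum(1 for p, y in zip(predictions, labels) if p == y)
    return matches / len(labels)
```

During the documentation build, sphinx.ext.napoleon parses the `Parameters` and `Returns` sections of such docstrings into the generated Python API reference.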
diff --git a/docs/en_US/curvefittingAssessor.md b/docs/en_US/CurvefittingAssessor.md similarity index 100% rename from docs/en_US/curvefittingAssessor.md rename to docs/en_US/CurvefittingAssessor.md diff --git a/docs/en_US/Customize_Advisor.md b/docs/en_US/CustomizeAdvisor.md similarity index 100% rename from docs/en_US/Customize_Advisor.md rename to docs/en_US/CustomizeAdvisor.md diff --git a/docs/en_US/Customize_Assessor.md b/docs/en_US/CustomizeAssessor.md similarity index 100% rename from docs/en_US/Customize_Assessor.md rename to docs/en_US/CustomizeAssessor.md diff --git a/docs/en_US/Customize_Tuner.md b/docs/en_US/CustomizeTuner.md similarity index 97% rename from docs/en_US/Customize_Tuner.md rename to docs/en_US/CustomizeTuner.md index bee72489f9..a57489a968 100644 --- a/docs/en_US/Customize_Tuner.md +++ b/docs/en_US/CustomizeTuner.md @@ -109,4 +109,4 @@ More detail example you could see: ### Write a more advanced automl algorithm -The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called `advisor` which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](Customize_Advisor.md) for how to write a customized advisor. \ No newline at end of file +The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called `advisor` which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor. \ No newline at end of file diff --git a/docs/en_US/evolutionTuner.md b/docs/en_US/EvolutionTuner.md similarity index 100% rename from docs/en_US/evolutionTuner.md rename to docs/en_US/EvolutionTuner.md diff --git a/docs/en_US/Examples.rst b/docs/en_US/Examples.rst deleted file mode 100644 index 8c605f67a3..0000000000 --- a/docs/en_US/Examples.rst +++ /dev/null @@ -1,12 +0,0 @@ -###################### -Examples -###################### - -.. toctree:: - :maxdepth: 2 - - MNIST - Cifar10 - Scikit-learn - EvolutionSQuAD - GBDT diff --git a/docs/en_US/ExperimentConfig.md b/docs/en_US/ExperimentConfig.md index 536571c1e1..892fcd1526 100644 --- a/docs/en_US/ExperimentConfig.md +++ b/docs/en_US/ExperimentConfig.md @@ -169,7 +169,7 @@ machineList: * __remote__ submit trial jobs to remote ubuntu machines, and __machineList__ field should be filed in order to set up SSH connection to remote machine. - * __pai__ submit trial jobs to [OpenPai](https://github.com/Microsoft/pai) of Microsoft. For more details of pai configuration, please reference [PAIMOdeDoc](./PAIMode.md) + * __pai__ submit trial jobs to [OpenPai](https://github.com/Microsoft/pai) of Microsoft. 
For more details of PAI configuration, please refer to the [PAI mode doc](./PaiMode.md) * __kubeflow__ submit trial jobs to [kubeflow](https://www.kubeflow.org/docs/about/kubeflow/), NNI support kubeflow based on normal kubernetes and [azure kubernetes](https://azure.microsoft.com/en-us/services/kubernetes-service/). diff --git a/docs/en_US/FAQ.md b/docs/en_US/FAQ.md index 61f108b1c0..05756fd08b 100644 --- a/docs/en_US/FAQ.md +++ b/docs/en_US/FAQ.md @@ -2,14 +2,13 @@ This page is for frequent asked questions and answers. - ### tmp folder fulled -nnictl will use tmp folder as a temporary folder to copy files under codeDir when executing experimentation creation. +nnictl uses the tmp folder as a temporary folder to copy files under codeDir when creating an experiment. When met errors like below, try to clean up **tmp** folder first. > OSError: [Errno 28] No space left on device ### Cannot get trials' metrics in OpenPAI mode -In OpenPAI training mode, we start a rest server which listens on 51189 port in NNI Manager to receive metrcis reported from trials running in OpenPAI cluster. If you didn't see any metrics from WebUI in OpenPAI mode, check your machine where NNI manager runs on to make sure 51189 port is turned on in the firewall rule. +In OpenPAI training mode, we start a rest server which listens on port 51189 in NNI Manager to receive metrics reported from trials running in the OpenPAI cluster. If you don't see any metrics in the WebUI in OpenPAI mode, check the machine where NNI Manager runs to make sure port 51189 is open in the firewall rule. ### Segmentation Fault (core dumped) when installing > make: *** [install-XXX] Segmentation fault (core dumped) @@ -19,7 +18,7 @@ Please try the following solutions in turn: * Install NNI with `--no-cache-dir` flag like `python3 -m pip install nni --no-cache-dir` ### Job management error: getIPV4Address() failed because os.networkInterfaces().eth0 is undefined. -Your machine don't have eth0 device, please set [nniManagerIp](ExperimentConfig.md) in your config file manually. +Your machine doesn't have an eth0 device; please set [nniManagerIp](ExperimentConfig.md) in your config file manually. ### Exceed the MaxDuration but didn't stop When the duration of experiment reaches the maximum duration, nniManager will not create new trials, but the existing trials will continue unless user manually stop the experiment. @@ -28,7 +27,14 @@ When the duration of experiment reaches the maximum duration, nniManager will no If you upgrade your NNI or you delete some config files of NNI when there is an experiment running, this kind of issue may happen because the loss of config file. You could use `ps -ef | grep node` to find the pid of your experiment, and use `kill -9 {pid}` to kill it manually. ### Could not get `default metric` in webUI of virtual machines -Config the network mode to bridge mode or other mode that could make virtual machine's host accessible from external machine, and make sure the port of virtual machine is not forbidden by firewall. +Configure the network mode to bridge mode or another mode that makes the virtual machine's host accessible from an external machine, and make sure the virtual machine's port is not blocked by the firewall. + +### Could not open webUI link +There are several possible reasons why the WebUI cannot be opened: + +* http://127.0.0.1, http://172.17.0.1 and http://10.0.0.15 all refer to localhost. If you start your experiment on a server or remote machine,
you can replace the IP with your server IP to view the WebUI, like http://[your_server_ip]:8080 +* If you still can't see the WebUI after you use the server IP, check the proxy and the firewall settings of your machine, or use the browser on the machine where you started your NNI experiment. +* Another reason may be that your experiment failed and NNI could not get the experiment information. You can check the log of NNIManager in the following directory: ~/nni/experiment/[your_experiment_id]/log/nnimanager.log ### Windows local mode problems Please refer to [NNI Windows local mode](WindowsLocalMode.md) diff --git a/docs/en_US/FrameworkControllerMode.md b/docs/en_US/FrameworkControllerMode.md index 9d4c410786..14574f43c3 100644 --- a/docs/en_US/FrameworkControllerMode.md +++ b/docs/en_US/FrameworkControllerMode.md @@ -100,4 +100,4 @@ Trial configuration in frameworkcontroller mode have the following configuration After you prepare a config file, you could run your experiment by nnictl. The way to start an experiment on frameworkcontroller is similar to kubeflow, please refer the [document](./KubeflowMode.md) for more information. ## version check -NNI support version check feature in since version 0.6, [refer](PAIMode.md) \ No newline at end of file +NNI supports the version check feature since version 0.6; see [PAI mode](PaiMode.md) for details. \ No newline at end of file diff --git a/docs/en_US/gbdt_example.md b/docs/en_US/GbdtExample.md similarity index 100% rename from docs/en_US/gbdt_example.md rename to docs/en_US/GbdtExample.md diff --git a/docs/en_US/gridsearchTuner.md b/docs/en_US/GridsearchTuner.md similarity index 100% rename from docs/en_US/gridsearchTuner.md rename to docs/en_US/GridsearchTuner.md diff --git a/docs/en_US/HowToDebug.md b/docs/en_US/HowToDebug.md index e33cc45480..2ad14e9295 100644 --- a/docs/en_US/HowToDebug.md +++ b/docs/en_US/HowToDebug.md @@ -21,7 +21,7 @@ There are three kinds of log in NNI. When creating a new experiment, you can spe All possible errors that happen when launching an NNI experiment can be found here. -You can use `nnictl log stderr` to find error information. For more options please refer to [NNICTL](NNICTLDOC.md) +You can use `nnictl log stderr` to find error information. For more options, please refer to [NNICTL](Nnictl.md) ### Experiment Root Directory diff --git a/docs/en_US/HowToImplementTrainingService.md b/docs/en_US/HowToImplementTrainingService.md index 582497c382..ca9b86f255 100644 --- a/docs/en_US/HowToImplementTrainingService.md +++ b/docs/en_US/HowToImplementTrainingService.md @@ -7,7 +7,7 @@ TrainingService is a module related to platform management and job schedule in N ## System architecture ![](../img/NNIDesign.jpg) -The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of system, in charge of calling TrainingService to manage trial jobs and the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs, it communicates with nniManager module, and has different instance according to different training platform. For the time being, NNI supports local platfrom, [remote platfrom](RemoteMachineMode.md), [PAI platfrom](PAIMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platfrom](FrameworkController.md). +The brief system architecture of NNI is shown in the picture.
NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module and has a different instance for each training platform. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkController.md). In this document, we introduce the brief design of TrainingService. If users want to add a new TrainingService instance, they just need to complete a child class to implement TrainingService, don't need to understand the code detail of NNIManager, Dispatcher or other modules. ## Folder structure of code @@ -146,4 +146,4 @@ When users submit a trial job to cloud platform, they should wrap their trial co ## Reference For more information about how to debug, please [refer](HowToDebug.md). -The guide line of how to contribute, please [refer](CONTRIBUTING). \ No newline at end of file +For the guideline on how to contribute, please [refer](Contributing.md). \ No newline at end of file diff --git a/docs/en_US/hyperbandAdvisor.md b/docs/en_US/HyperbandAdvisor.md similarity index 100% rename from docs/en_US/hyperbandAdvisor.md rename to docs/en_US/HyperbandAdvisor.md diff --git a/docs/en_US/hyperoptTuner.md b/docs/en_US/HyperoptTuner.md similarity index 100% rename from docs/en_US/hyperoptTuner.md rename to docs/en_US/HyperoptTuner.md diff --git a/docs/en_US/Installation.md b/docs/en_US/Installation.md index 1d18a8b799..a1ddf2b274 100644 --- a/docs/en_US/Installation.md +++ b/docs/en_US/Installation.md @@ -88,12 +88,12 @@ Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is ## Further reading * [Overview](Overview.md) -* [Use command line tool nnictl](NNICTLDOC.md) +* [Use command line tool nnictl](Nnictl.md) * [Use NNIBoard](WebUI.md) * [Define search space](SearchSpaceSpec.md) * [Config an experiment](ExperimentConfig.md) * [How to run an experiment on local (with multiple GPUs)?](LocalMode.md) * [How to run an experiment on multiple machines?](RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](PAIMode.md) +* [How to run an experiment on OpenPAI?](PaiMode.md) * [How to run an experiment on Kubernetes through Kubeflow?](KubeflowMode.md) * [How to run an experiment on Kubernetes through FrameworkController?](FrameworkControllerMode.md) diff --git a/docs/en_US/KubeflowMode.md b/docs/en_US/KubeflowMode.md index 44ceb7dffb..ccec0a8005 100644 --- a/docs/en_US/KubeflowMode.md +++ b/docs/en_US/KubeflowMode.md @@ -197,6 +197,6 @@ Notice: In kubeflow mode, NNIManager will start a rest server and listen on a po Once a trial job is completed, you can goto NNI WebUI's overview page (like http://localhost:8080/oview) to check trial's information. ## version check -NNI support version check feature in since version 0.6, [refer](PAIMode.md) +NNI supports the version check feature since version 0.6; see [PAI mode](PaiMode.md) for details. Any problems when using NNI in kubeflow mode, please create issues on [NNI Github repo](https://github.com/Microsoft/nni). diff --git a/docs/en_US/LocalMode.md b/docs/en_US/LocalMode.md index c5fce7b234..3aae6661dd 100644 --- a/docs/en_US/LocalMode.md +++ b/docs/en_US/LocalMode.md @@ -85,14 +85,14 @@ Let's use a simple trial example, e.g. mnist, provided by NNI.
After you install This command will be filled in the YAML configure file below. Please refer to [here](Trials.md) for how to write your own trial. -**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](Customize_Tuner.md)), but for simplicity, here we choose a tuner provided by NNI as below: +**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm, etc. Users can write their own tuner (refer to [here](CustomizeTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below: tuner: builtinTunerName: TPE classArgs: optimize_mode: maximize -*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments pass to the tuner (the spec of builtin tuners can be found [here](Builtin_Tuner.md)), *optimization_mode* is to indicate whether you want to maximize or minimize your trial's result. +*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here](BuiltinTuner.md)), and *optimization_mode* indicates whether you want to maximize or minimize your trial's result. **Prepare configure file**: Since you have already known which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML configure file. NNI provides a demo configure file for each trial example, `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below: @@ -130,7 +130,7 @@ With all these steps done, we can run the experiment with the following command: nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml -You can refer to [here](NNICTLDOC.md) for more usage guide of *nnictl* command line tool. +You can refer to [here](Nnictl.md) for a more detailed usage guide of the *nnictl* command line tool. ## View experiment results The experiment has been running now. Other than *nnictl*, NNI also provides WebUI for you to view experiment progress, to control your experiment, and some other appealing features. diff --git a/docs/en_US/medianstopAssessor.md b/docs/en_US/MedianstopAssessor.md similarity index 100% rename from docs/en_US/medianstopAssessor.md rename to docs/en_US/MedianstopAssessor.md diff --git a/docs/en_US/metisTuner.md b/docs/en_US/MetisTuner.md similarity index 100% rename from docs/en_US/metisTuner.md rename to docs/en_US/MetisTuner.md diff --git a/docs/en_US/mnist_examples.md b/docs/en_US/MnistExamples.md similarity index 100% rename from docs/en_US/mnist_examples.md rename to docs/en_US/MnistExamples.md diff --git a/docs/en_US/multiPhase.md b/docs/en_US/MultiPhase.md similarity index 100% rename from docs/en_US/multiPhase.md rename to docs/en_US/MultiPhase.md diff --git a/docs/en_US/networkmorphismTuner.md b/docs/en_US/NetworkmorphismTuner.md similarity index 100% rename from docs/en_US/networkmorphismTuner.md rename to docs/en_US/NetworkmorphismTuner.md diff --git a/docs/en_US/NNICTLDOC.md b/docs/en_US/Nnictl.md similarity index 99% rename from docs/en_US/NNICTLDOC.md rename to docs/en_US/Nnictl.md index d9906c134e..5eaa51e25e 100644 --- a/docs/en_US/NNICTLDOC.md +++ b/docs/en_US/Nnictl.md @@ -453,7 +453,7 @@ Debug mode will disable version check function in Trialkeeper.
> import data to a running experiment ```bash - nnictl experiment [experiment_id] -f experiment_data.json + nnictl experiment import [experiment_id] -f experiment_data.json ``` diff --git a/docs/en_US/Overview.md b/docs/en_US/Overview.md index 0757b9ccf2..1d47b6243e 100644 --- a/docs/en_US/Overview.md +++ b/docs/en_US/Overview.md @@ -49,11 +49,11 @@ More details about how to run an experiment, please refer to [Get Started](Quick ## Learn More * [Get started](QuickStart.md) * [How to adapt your trial code on NNI?](Trials.md) -* [What are tuners supported by NNI?](Builtin_Tuner.md) -* [How to customize your own tuner?](Customize_Tuner.md) -* [What are assessors supported by NNI?](Builtin_Assessors.md) -* [How to customize your own assessor?](Customize_Assessor.md) +* [What are tuners supported by NNI?](BuiltinTuner.md) +* [How to customize your own tuner?](CustomizeTuner.md) +* [What are assessors supported by NNI?](BuiltinAssessors.md) +* [How to customize your own assessor?](CustomizeAssessor.md) * [How to run an experiment on local?](LocalMode.md) * [How to run an experiment on multiple machines?](RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](PAIMode.md) -* [Examples](mnist_examples.md) \ No newline at end of file +* [How to run an experiment on OpenPAI?](PaiMode.md) +* [Examples](MnistExamples.md) \ No newline at end of file diff --git a/docs/en_US/PAIMode.md b/docs/en_US/PaiMode.md similarity index 100% rename from docs/en_US/PAIMode.md rename to docs/en_US/PaiMode.md diff --git a/docs/en_US/QuickStart.md b/docs/en_US/QuickStart.md index 4ef2651efe..9f7e929ac7 100644 --- a/docs/en_US/QuickStart.md +++ b/docs/en_US/QuickStart.md @@ -157,7 +157,7 @@ Run the **config_windows.yml** file from your command line to start MNIST experi nnictl create --config nni/examples/trials/mnist/config_windows.yml ``` -Note, **nnictl** is a command line tool, which can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click [here](NNICTLDOC.md) for more usage of `nnictl` +Note: **nnictl** is a command line tool that can be used to control experiments, such as starting/stopping/resuming an experiment and starting/stopping NNIBoard. Click [here](Nnictl.md) for more usage of `nnictl` Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. And this is what we expected to get: @@ -197,7 +197,7 @@ After you start your experiment in NNI successfully, you can find a message in t The Web UI urls are: [Your IP]:8080 ``` -Open the `Web UI url`(In this information is: `[Your IP]:8080`) in your browser, you can view detail information of the experiment and all the submitted trial jobs as shown below. +Open the `Web UI url` (in this message, `[Your IP]:8080`) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, you can refer to the [FAQ](FAQ.md). #### View summary page @@ -243,12 +243,12 @@ Below is the status of the all trials.
Specifically: ## Related Topic -* [Try different Tuners](Builtin_Tuner.md) -* [Try different Assessors](Builtin_Assessors.md) -* [How to use command line tool nnictl](NNICTLDOC.md) +* [Try different Tuners](BuiltinTuner.md) +* [Try different Assessors](BuiltinAssessors.md) +* [How to use command line tool nnictl](Nnictl.md) * [How to write a trial](Trials.md) * [How to run an experiment on local (with multiple GPUs)?](LocalMode.md) * [How to run an experiment on multiple machines?](RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](PAIMode.md) +* [How to run an experiment on OpenPAI?](PaiMode.md) * [How to run an experiment on Kubernetes through Kubeflow?](KubeflowMode.md) * [How to run an experiment on Kubernetes through FrameworkController?](FrameworkControllerMode.md) diff --git a/docs/en_US/RELEASE.md b/docs/en_US/Release.md similarity index 95% rename from docs/en_US/RELEASE.md rename to docs/en_US/Release.md index 273bb5b37a..84c369e4b0 100644 --- a/docs/en_US/RELEASE.md +++ b/docs/en_US/Release.md @@ -6,9 +6,9 @@ * [Support NNI on Windows](./WindowsLocalMode.md) * NNI running on windows for local mode -* [New advisor: BOHB](./bohbAdvisor.md) +* [New advisor: BOHB](./BohbAdvisor.md) * Support a new advisor BOHB, which is a robust and efficient hyperparameter tuning algorithm, combines the advantages of Bayesian optimization and Hyperband -* [Support import and export experiment data through nnictl](./NNICTLDOC.md#experiment) +* [Support import and export experiment data through nnictl](./Nnictl.md#experiment) * Generate analysis results report after the experiment execution * Support import data to tuner and advisor for tuning * [Designated gpu devices for NNI trial jobs](./ExperimentConfig.md#localConfig) @@ -31,7 +31,7 @@ ### Major Features -* [Version checking](https://github.com/Microsoft/nni/blob/master/docs/en_US/PAIMode.md#version-check) +* [Version checking](https://github.com/Microsoft/nni/blob/master/docs/en_US/PaiMode.md#version-check) * check whether the version is consistent between nniManager and trialKeeper * [Report final metrics for early stop job](https://github.com/Microsoft/nni/issues/776) * If includeIntermediateResults is true, the last intermediate result of the trial that is early stopped by assessor is sent to tuner as final result. The default value of includeIntermediateResults is false. @@ -87,10 +87,10 @@ #### New tuner and assessor supports -* Support [Metis tuner](metisTuner.md) as a new NNI tuner. Metis algorithm has been proofed to be well performed for **online** hyper-parameter tuning. +* Support [Metis tuner](MetisTuner.md) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning. * Support [ENAS customized tuner](https://github.com/countif/enas_nni), a tuner contributed by github community user, is an algorithm for neural network search, it could learn neural network architecture via reinforcement learning and serve a better performance than NAS. -* Support [Curve fitting assessor](curvefittingAssessor.md) for early stop policy using learning curve extrapolation. -* Advanced Support of [Weight Sharing](./AdvancedNAS.md): Enable weight sharing for NAS tuners, currently through NFS. +* Support [Curve fitting assessor](CurvefittingAssessor.md) for early stop policy using learning curve extrapolation. +* Advanced Support of [Weight Sharing](./AdvancedNas.md): Enable weight sharing for NAS tuners, currently through NFS.
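As a concrete illustration of how one of the built-in tuners listed above is selected, the `tuner` section of the experiment YAML follows the same format shown in the LocalMode.md hunk earlier in this diff; this is only a sketch, and the builtin name `MetisTuner` is assumed here rather than taken from the hunks above:

```yaml
tuner:
  # assumed builtin name for the Metis tuner; TPE is the name used in the LocalMode example
  builtinTunerName: MetisTuner
  classArgs:
    # whether the reported final metric should be maximized or minimized
    optimize_mode: maximize
```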
#### Training Service Enhancement @@ -112,7 +112,7 @@ #### New tuner supports -* Support [network morphism](networkmorphismTuner.md) as a new tuner +* Support [network morphism](NetworkmorphismTuner.md) as a new tuner #### Training Service improvements @@ -146,8 +146,8 @@ * [Kubeflow Training service](./KubeflowMode.md) * Support tf-operator * [Distributed trial example](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow -* [Grid search tuner](gridsearchTuner.md) -* [Hyperband tuner](hyperbandAdvisor.md) +* [Grid search tuner](GridsearchTuner.md) +* [Hyperband tuner](HyperbandAdvisor.md) * Support launch NNI experiment on MAC * WebUI * UI support for hyperband tuner @@ -182,7 +182,7 @@ ``` * Support updating max trial number. - use `nnictl update --help` to learn more. Or refer to [NNICTL Spec](NNICTLDOC.md) for the fully usage of NNICTL. + use `nnictl update --help` to learn more. Or refer to [NNICTL Spec](Nnictl.md) for the full usage of NNICTL. ### API new features and updates @@ -227,10 +227,10 @@ ### Major Features -* Support [OpenPAI](https://github.com/Microsoft/pai) Training Platform (See [here](./PAIMode.md) for instructions about how to submit NNI job in pai mode) +* Support [OpenPAI](https://github.com/Microsoft/pai) Training Platform (See [here](./PaiMode.md) for instructions about how to submit NNI job in pai mode) * Support training services on pai mode. NNI trials will be scheduled to run on OpenPAI cluster * NNI trial's output (including logs and model file) will be copied to OpenPAI HDFS for further debugging and checking -* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (See [here](smacTuner.md) for instructions about how to use SMAC tuner) +* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (See [here](SmacTuner.md) for instructions about how to use SMAC tuner) * [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO to handle categorical parameters. The SMAC supported by NNI is a wrapper on [SMAC3](https://github.com/automl/SMAC3) * Support NNI installation on [conda](https://conda.io/docs/index.html) and python virtual environment * Others diff --git a/docs/en_US/RemoteMachineMode.md b/docs/en_US/RemoteMachineMode.md index 2d18dc7c71..f5e0aa3859 100644 --- a/docs/en_US/RemoteMachineMode.md +++ b/docs/en_US/RemoteMachineMode.md @@ -65,4 +65,4 @@ nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml to start the experiment. ## version check -NNI support version check feature in since version 0.6, [refer](PAIMode.md) \ No newline at end of file +NNI supports the version check feature since version 0.6; see [PAI mode](PaiMode.md) for details. \ No newline at end of file diff --git a/docs/en_US/SetupNNIDeveloperEnvironment.md b/docs/en_US/SetupNniDeveloperEnvironment.md similarity index 95% rename from docs/en_US/SetupNNIDeveloperEnvironment.md rename to docs/en_US/SetupNniDeveloperEnvironment.md index 2bd47d129a..737354abce 100644 --- a/docs/en_US/SetupNNIDeveloperEnvironment.md +++ b/docs/en_US/SetupNniDeveloperEnvironment.md @@ -63,4 +63,4 @@ After the code changes, use **step 3** to rebuild your codes, then the changes w --- At last, wish you have a wonderful day.
-For more contribution guidelines on making PR's or issues to NNI source code, you can refer to our [CONTRIBUTING](./CONTRIBUTING.md) document. +For more contribution guidelines on making PR's or issues to NNI source code, you can refer to our [Contributing](./Contributing.md) document. diff --git a/docs/en_US/sklearn_examples.md b/docs/en_US/SklearnExamples.md similarity index 100% rename from docs/en_US/sklearn_examples.md rename to docs/en_US/SklearnExamples.md diff --git a/docs/en_US/smacTuner.md b/docs/en_US/SmacTuner.md similarity index 100% rename from docs/en_US/smacTuner.md rename to docs/en_US/SmacTuner.md diff --git a/docs/en_US/SQuAD_evolution_examples.md b/docs/en_US/SquadEvolutionExamples.md similarity index 100% rename from docs/en_US/SQuAD_evolution_examples.md rename to docs/en_US/SquadEvolutionExamples.md diff --git a/docs/en_US/Trials.md b/docs/en_US/Trials.md index 2bc8f27f07..382ef65024 100644 --- a/docs/en_US/Trials.md +++ b/docs/en_US/Trials.md @@ -41,14 +41,14 @@ RECEIVED_PARAMS = nni.get_next_parameter() ```python nni.report_intermediate_result(metrics) ``` -`metrics` could be any python object. If users use NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number e.g., float, int, 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to [assessor](Builtin_Assessors.md). Usually, `metrics` could be periodically evaluated loss or accuracy. +`metrics` could be any python object. If users use NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number e.g., float, int, 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to [assessor](BuiltinAssessors.md). Usually, `metrics` could be periodically evaluated loss or accuracy. - Report performance of the configuration ```python nni.report_final_result(metrics) ``` -`metrics` also could be any python object. If users use NNI built-in tuner/assessor, `metrics` follows the same format rule as that in `report_intermediate_result`, the number indicates the model's performance, for example, the model's accuracy, loss etc. This `metrics` is reported to [tuner](Builtin_Tuner.md). +`metrics` also could be any python object. If users use NNI built-in tuner/assessor, `metrics` follows the same format rule as that in `report_intermediate_result`, the number indicates the model's performance, for example, the model's accuracy, loss etc. This `metrics` is reported to [tuner](BuiltinTuner.md). ### Step 3 - Enable NNI API @@ -156,8 +156,8 @@ For more information, please refer to [HowToDebug](HowToDebug.md) ## More Trial Examples -* [MNIST examples](mnist_examples.md) -* [Finding out best optimizer for Cifar10 classification](cifar10_examples.md) -* [How to tune Scikit-learn on NNI](sklearn_examples.md) -* [Automatic Model Architecture Search for Reading Comprehension.](SQuAD_evolution_examples.md) -* [Tuning GBDT on NNI](gbdt_example.md) +* [MNIST examples](MnistExamples.md) +* [Finding out best optimizer for Cifar10 classification](Cifar10Examples.md) +* [How to tune Scikit-learn on NNI](SklearnExamples.md) +* [Automatic Model Architecture Search for Reading Comprehension.](SquadEvolutionExamples.md) +* [Tuning GBDT on NNI](GbdtExample.md) diff --git a/docs/en_US/advanced.rst b/docs/en_US/advanced.rst index 3362096805..2ed7dc420f 100644 --- a/docs/en_US/advanced.rst +++ b/docs/en_US/advanced.rst @@ -2,5 +2,5 @@ Advanced Features ===================== .. 
toctree:: - MultiPhase - AdvancedNAS \ No newline at end of file + MultiPhase + AdvancedNas \ No newline at end of file diff --git a/docs/en_US/assessors.rst b/docs/en_US/assessors.rst index b65bd49f5a..2229782280 100644 --- a/docs/en_US/assessors.rst +++ b/docs/en_US/assessors.rst @@ -15,5 +15,5 @@ Like Tuners, users can either use built-in Assessors, or customize an Assessor o .. toctree:: :maxdepth: 2 - Builtin Assessors - Customized Assessors + Builtin Assessors + Customized Assessors diff --git a/docs/en_US/builtinAssessor.rst b/docs/en_US/builtinAssessor.rst deleted file mode 100644 index 5616570794..0000000000 --- a/docs/en_US/builtinAssessor.rst +++ /dev/null @@ -1,9 +0,0 @@ -Builtin-Assessors -================= - -.. toctree:: - :maxdepth: 1 - - Overview - Medianstop - Curvefitting \ No newline at end of file diff --git a/docs/en_US/builtinTuner.rst b/docs/en_US/builtinTuner.rst deleted file mode 100644 index ad9853c97f..0000000000 --- a/docs/en_US/builtinTuner.rst +++ /dev/null @@ -1,18 +0,0 @@ -Builtin-Tuners -================== - -.. toctree:: - :maxdepth: 1 - - Overview - TPE - Random Search - Anneal - Naive Evolution - SMAC - Batch Tuner - Grid Search - Hyperband - Network Morphism - Metis Tuner - BOHB \ No newline at end of file diff --git a/docs/en_US/builtin_assessor.rst b/docs/en_US/builtin_assessor.rst new file mode 100644 index 0000000000..c59743512c --- /dev/null +++ b/docs/en_US/builtin_assessor.rst @@ -0,0 +1,9 @@ +Builtin-Assessors +================= + +.. toctree:: + :maxdepth: 1 + + Overview + Medianstop + Curvefitting \ No newline at end of file diff --git a/docs/en_US/builtin_tuner.rst b/docs/en_US/builtin_tuner.rst new file mode 100644 index 0000000000..5066d35edc --- /dev/null +++ b/docs/en_US/builtin_tuner.rst @@ -0,0 +1,18 @@ +Builtin-Tuners +================== + +.. toctree:: + :maxdepth: 1 + + Overview + TPE + Random Search + Anneal + Naive Evolution + SMAC + Batch Tuner + Grid Search + Hyperband + Network Morphism + Metis Tuner + BOHB \ No newline at end of file diff --git a/docs/en_US/Contribution.rst b/docs/en_US/contribution.rst similarity index 52% rename from docs/en_US/Contribution.rst rename to docs/en_US/contribution.rst index 9107f039d9..3e2853b74c 100644 --- a/docs/en_US/Contribution.rst +++ b/docs/en_US/contribution.rst @@ -3,5 +3,5 @@ Contribute to NNI ############################### .. toctree:: - Development Setup - Contribution Guide \ No newline at end of file + Development Setup + Contribution Guide \ No newline at end of file diff --git a/docs/en_US/examples.rst b/docs/en_US/examples.rst new file mode 100644 index 0000000000..92183d1997 --- /dev/null +++ b/docs/en_US/examples.rst @@ -0,0 +1,12 @@ +###################### +Examples +###################### + +.. toctree:: + :maxdepth: 2 + + MNIST + Cifar10 + Scikit-learn + EvolutionSQuAD + GBDT diff --git a/docs/en_US/index.rst b/docs/en_US/index.rst index c253b47c60..dc7d64a7e2 100644 --- a/docs/en_US/index.rst +++ b/docs/en_US/index.rst @@ -13,10 +13,10 @@ Contents Overview QuickStart - Tutorials - Examples - Reference + Tutorials + Examples + Reference FAQ - Contribution - Changelog + Contribution + Changelog Blog diff --git a/docs/en_US/Reference.rst b/docs/en_US/reference.rst similarity index 89% rename from docs/en_US/Reference.rst rename to docs/en_US/reference.rst index f1d82c04d5..4d502e30f7 100644 --- a/docs/en_US/Reference.rst +++ b/docs/en_US/reference.rst @@ -4,7 +4,7 @@ References .. 
toctree:: :maxdepth: 3 - Command Line + Command Line Python API Annotation Configuration diff --git a/docs/en_US/training_services.rst b/docs/en_US/training_services.rst index 24798675f5..1cf6dd552f 100644 --- a/docs/en_US/training_services.rst +++ b/docs/en_US/training_services.rst @@ -4,6 +4,6 @@ Introduction to NNI Training Services .. toctree:: Local Remote - OpenPAI + OpenPAI Kubeflow FrameworkController \ No newline at end of file diff --git a/docs/en_US/tuners.rst b/docs/en_US/tuners.rst index ea181f6ec2..471db7037c 100644 --- a/docs/en_US/tuners.rst +++ b/docs/en_US/tuners.rst @@ -13,6 +13,6 @@ For details, please refer to the following tutorials: .. toctree:: :maxdepth: 2 - Builtin Tuners - Customized Tuners - Customized Advisor \ No newline at end of file + Builtin Tuners + Customized Tuners + Customized Advisor \ No newline at end of file diff --git a/docs/en_US/Tutorials.rst b/docs/en_US/tutorials.rst similarity index 100% rename from docs/en_US/Tutorials.rst rename to docs/en_US/tutorials.rst diff --git a/tools/README.md b/tools/README.md index 78983fc9ea..e215e893ef 100644 --- a/tools/README.md +++ b/tools/README.md @@ -54,4 +54,4 @@ python >= 3.5 please reference to the [NNI CTL document]. -[NNI CTL document]: ../docs/en_US/NNICTLDOC.md +[NNI CTL document]: ../docs/en_US/Nnictl.md
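Since a large share of the hunks above simply re-point links at the renamed Nnictl.md, here is a consolidated sketch of the nnictl invocations that actually appear in this diff; the config path and `[experiment_id]` are illustrative placeholders:

```bash
# create an experiment from a YAML config (LocalMode.md / QuickStart.md hunks)
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml

# import previously recorded data into a running experiment (Nnictl.md hunk)
nnictl experiment import [experiment_id] -f experiment_data.json

# inspect stderr output when an experiment fails to launch (HowToDebug.md hunk)
nnictl log stderr

# list the options for updating a running experiment, e.g. the max trial number (Release.md hunk)
nnictl update --help
```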