diff --git a/README.md b/README.md index 420563e26e..98eea29d3d 100644 --- a/README.md +++ b/README.md @@ -52,32 +52,32 @@ The tool dispatches and runs trial jobs generated by tuning algorithms to search - Tuner + Tuner - Assessor + Assessor @@ -229,11 +229,11 @@ You can use these commands to get more information about the experiment ## **How to** * [Install NNI](docs/en_US/Installation.md) -* [Use command line tool nnictl](docs/en_US/NNICTLDOC.md) +* [Use command line tool nnictl](docs/en_US/Nnictl.md) * [Use NNIBoard](docs/en_US/WebUI.md) * [How to define search space](docs/en_US/SearchSpaceSpec.md) * [How to define a trial](docs/en_US/Trials.md) -* [How to choose tuner/search-algorithm](docs/en_US/Builtin_Tuner.md) +* [How to choose tuner/search-algorithm](docs/en_US/BuiltinTuner.md) * [Config an experiment](docs/en_US/ExperimentConfig.md) * [How to use annotation](docs/en_US/Trials.md#nni-python-annotation) @@ -241,12 +241,12 @@ You can use these commands to get more information about the experiment * [Run an experiment on local (with multiple GPUs)?](docs/en_US/LocalMode.md) * [Run an experiment on multiple machines?](docs/en_US/RemoteMachineMode.md) -* [Run an experiment on OpenPAI?](docs/en_US/PAIMode.md) +* [Run an experiment on OpenPAI?](docs/en_US/PaiMode.md) * [Run an experiment on Kubeflow?](docs/en_US/KubeflowMode.md) * [Try different tuners](docs/en_US/tuners.rst) * [Try different assessors](docs/en_US/assessors.rst) -* [Implement a customized tuner](docs/en_US/Customize_Tuner.md) -* [Implement a customized assessor](docs/en_US/Customize_Assessor.md) +* [Implement a customized tuner](docs/en_US/CustomizeTuner.md) +* [Implement a customized assessor](docs/en_US/CustomizeAssessor.md) * [Use Genetic Algorithm to find good model architectures for Reading Comprehension task](examples/trials/ga_squad/README.md) ## **Contribute** @@ -255,9 +255,9 @@ This project welcomes contributions and suggestions, we use [GitHub issues](http Issues with the **good first 
issue** label are simple and easy-to-start ones that we recommend new contributors to start with. -To set up environment for NNI development, refer to the instruction: [Set up NNI developer environment](docs/en_US/SetupNNIDeveloperEnvironment.md) +To set up the environment for NNI development, refer to the instructions: [Set up NNI developer environment](docs/en_US/SetupNniDeveloperEnvironment.md) -Before start coding, review and get familiar with the NNI Code Contribution Guideline: [Contributing](docs/en_US/CONTRIBUTING.md) +Before you start coding, review and get familiar with the NNI Code Contribution Guideline: [Contributing](docs/en_US/Contributing.md) We are in the process of writing the instructions for [How to Debug](docs/en_US/HowToDebug.md); you are also welcome to contribute questions or suggestions in this area. diff --git a/docs/en_US/AdvancedNAS.md b/docs/en_US/AdvancedNas.md similarity index 99% rename from docs/en_US/AdvancedNAS.md rename to docs/en_US/AdvancedNas.md index 72edcc1dd8..6e1a17c7d7 100644 --- a/docs/en_US/AdvancedNAS.md +++ b/docs/en_US/AdvancedNas.md @@ -12,7 +12,7 @@ With the NFS setup (see below), trial code can share model weight through loadin ```yaml tuner: codeDir: path/to/customer_tuner - classFileName: customer_tuner.py + classFileName: customer_tuner.py className: CustomerTuner classArgs: ...
Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotating strings, which does not affect the execution of the original code. Below is an example: ```python '''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)''' learning_rate = 0.1 + ``` The meaning of this example is that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the learning_rate variable. Specifically, this first line is an NNI annotation, which is a single string. Following is an assignment statement. What nni does here is to replace the right value of this assignment statement according to the information provided by the annotation line. diff --git a/docs/en_US/batchTuner.md b/docs/en_US/BatchTuner.md similarity index 100% rename from docs/en_US/batchTuner.md rename to docs/en_US/BatchTuner.md diff --git a/docs/en_US/Blog/HPOComparison.md b/docs/en_US/Blog/HpoComparison.md similarity index 100% rename from docs/en_US/Blog/HPOComparison.md rename to docs/en_US/Blog/HpoComparison.md diff --git a/docs/en_US/Blog/NASComparison.md b/docs/en_US/Blog/NasComparison.md similarity index 100% rename from docs/en_US/Blog/NASComparison.md rename to docs/en_US/Blog/NasComparison.md diff --git a/docs/en_US/Blog/index.rst b/docs/en_US/Blog/index.rst index a38ca82666..8eef1bb1f4 100644 --- a/docs/en_US/Blog/index.rst +++ b/docs/en_US/Blog/index.rst @@ -5,5 +5,5 @@ Research Blog .. 
toctree:: :maxdepth: 2 - Hyperparameter Optimization Comparison<HPOComparison> - Neural Architecture Search Comparison<NASComparison> \ No newline at end of file + Hyperparameter Optimization Comparison<HpoComparison> + Neural Architecture Search Comparison<NasComparison> diff --git a/docs/en_US/bohbAdvisor.md b/docs/en_US/BohbAdvisor.md similarity index 99% rename from docs/en_US/bohbAdvisor.md rename to docs/en_US/BohbAdvisor.md index 1ec2d7b09c..111525ab92 100644 --- a/docs/en_US/bohbAdvisor.md +++ b/docs/en_US/BohbAdvisor.md @@ -10,7 +10,7 @@ Below we divide introduction of the BOHB process into two parts: ### HB (Hyperband) -We follow Hyperband’s way of choosing the budgets and continue to use SuccessiveHalving, for more details, you can refer to the [Hyperband in NNI](hyperbandAdvisor.md) and [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below. +We follow Hyperband’s way of choosing the budgets and continue to use SuccessiveHalving; for more details, you can refer to [Hyperband in NNI](HyperbandAdvisor.md) and the [reference paper of Hyperband](https://arxiv.org/abs/1603.06560). This procedure is summarized by the pseudocode below.
![](../img/bohb_1.png) diff --git a/docs/en_US/Builtin_Assessors.md b/docs/en_US/BuiltinAssessors.md similarity index 100% rename from docs/en_US/Builtin_Assessors.md rename to docs/en_US/BuiltinAssessors.md diff --git a/docs/en_US/Builtin_Tuner.md b/docs/en_US/BuiltinTuner.md similarity index 100% rename from docs/en_US/Builtin_Tuner.md rename to docs/en_US/BuiltinTuner.md diff --git a/docs/en_US/cifar10_examples.md b/docs/en_US/Cifar10Examples.md similarity index 100% rename from docs/en_US/cifar10_examples.md rename to docs/en_US/Cifar10Examples.md diff --git a/docs/en_US/CONTRIBUTING.md b/docs/en_US/Contributing.md similarity index 97% rename from docs/en_US/CONTRIBUTING.md rename to docs/en_US/Contributing.md index 4a6163352d..b964f3038b 100644 --- a/docs/en_US/CONTRIBUTING.md +++ b/docs/en_US/Contributing.md @@ -28,7 +28,7 @@ When raising issues, please specify the following: Provide PRs with appropriate tags for bug fixes or enhancements to the source code. Do follow the correct naming conventions and code styles when you work on and do try to implement all code reviews along the way. -If you are looking for How to develop and debug the NNI source code, you can refer to [How to set up NNI developer environment doc](./SetupNNIDeveloperEnvironment.md) file in the `docs` folder. +If you are looking for how to develop and debug the NNI source code, you can refer to the [How to set up NNI developer environment](./SetupNniDeveloperEnvironment.md) doc in the `docs` folder. Similarly for [Quick Start](QuickStart.md). For everything else, refer to [NNI Home page](http://nni.readthedocs.io). @@ -39,7 +39,7 @@ A person looking to contribute can take up an issue by claiming it as a comment/ ## Code Styles & Naming Conventions * We follow [PEP8](https://www.python.org/dev/peps/pep-0008/) for Python code and naming conventions, do try to adhere to the same when making a pull request or making a change.
One can also take the help of linters such as `flake8` or `pylint` -* We also follow [NumPy Docstring Style](https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy) for Python Docstring Conventions. During the [documentation building](CONTRIBUTING.md#documentation), we use [sphinx.ext.napoleon](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate Python API documentation from Docstring. +* We also follow [NumPy Docstring Style](https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy) for Python Docstring Conventions. During the [documentation building](Contributing.md#documentation), we use [sphinx.ext.napoleon](https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate Python API documentation from Docstring. ## Documentation Our documentation is built with [sphinx](http://sphinx-doc.org/), supporting [Markdown](https://guides.github.com/features/mastering-markdown/) and [reStructuredText](http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html) format. All our documentations are placed under [docs/en_US](https://github.com/Microsoft/nni/tree/master/docs). 
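As a small sketch of the NumPy docstring convention mentioned above; the function and its parameters are invented for illustration, not taken from NNI:

```python
def scale_metric(value, factor=1.0):
    """Scale a trial metric by a constant factor.

    Parameters
    ----------
    value : float
        The raw metric reported by a trial.
    factor : float, optional
        Multiplier applied to the metric (default is 1.0).

    Returns
    -------
    float
        The scaled metric.
    """
    return value * factor
```

With `sphinx.ext.napoleon` enabled, the `Parameters` and `Returns` sections above are parsed into structured API documentation during the build.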
diff --git a/docs/en_US/curvefittingAssessor.md b/docs/en_US/CurvefittingAssessor.md similarity index 100% rename from docs/en_US/curvefittingAssessor.md rename to docs/en_US/CurvefittingAssessor.md diff --git a/docs/en_US/Customize_Advisor.md b/docs/en_US/CustomizeAdvisor.md similarity index 100% rename from docs/en_US/Customize_Advisor.md rename to docs/en_US/CustomizeAdvisor.md diff --git a/docs/en_US/Customize_Assessor.md b/docs/en_US/CustomizeAssessor.md similarity index 100% rename from docs/en_US/Customize_Assessor.md rename to docs/en_US/CustomizeAssessor.md diff --git a/docs/en_US/Customize_Tuner.md b/docs/en_US/CustomizeTuner.md similarity index 97% rename from docs/en_US/Customize_Tuner.md rename to docs/en_US/CustomizeTuner.md index bee72489f9..a57489a968 100644 --- a/docs/en_US/Customize_Tuner.md +++ b/docs/en_US/CustomizeTuner.md @@ -109,4 +109,4 @@ More detail example you could see: ### Write a more advanced automl algorithm -The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. Therefore, we have another concept called `advisor` which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](Customize_Advisor.md) for how to write a customized advisor. \ No newline at end of file +The methods above are usually enough to write a general tuner. However, users may also want more methods, for example, intermediate results, trials' state (e.g., the methods in assessor), in order to have a more powerful automl algorithm. 
Therefore, we have another concept called `advisor` which directly inherits from `MsgDispatcherBase` in [`src/sdk/pynni/nni/msg_dispatcher_base.py`](https://github.com/Microsoft/nni/tree/master/src/sdk/pynni/nni/msg_dispatcher_base.py). Please refer to [here](CustomizeAdvisor.md) for how to write a customized advisor. \ No newline at end of file diff --git a/docs/en_US/evolutionTuner.md b/docs/en_US/EvolutionTuner.md similarity index 100% rename from docs/en_US/evolutionTuner.md rename to docs/en_US/EvolutionTuner.md diff --git a/docs/en_US/Examples.rst b/docs/en_US/Examples.rst deleted file mode 100644 index 8c605f67a3..0000000000 --- a/docs/en_US/Examples.rst +++ /dev/null @@ -1,12 +0,0 @@ -###################### -Examples -###################### - -.. toctree:: - :maxdepth: 2 - - MNIST - Cifar10 - Scikit-learn - EvolutionSQuAD - GBDT diff --git a/docs/en_US/ExperimentConfig.md b/docs/en_US/ExperimentConfig.md index 536571c1e1..892fcd1526 100644 --- a/docs/en_US/ExperimentConfig.md +++ b/docs/en_US/ExperimentConfig.md @@ -169,7 +169,7 @@ machineList: * __remote__ submit trial jobs to remote ubuntu machines, and the __machineList__ field should be filled in order to set up an SSH connection to the remote machine. - * __pai__ submit trial jobs to [OpenPai](https://github.com/Microsoft/pai) of Microsoft. For more details of pai configuration, please reference [PAIMOdeDoc](./PAIMode.md) + * __pai__ submit trial jobs to [OpenPAI](https://github.com/Microsoft/pai) of Microsoft. For more details of pai configuration, please refer to the [PaiMode doc](./PaiMode.md) * __kubeflow__ submit trial jobs to [kubeflow](https://www.kubeflow.org/docs/about/kubeflow/); NNI supports kubeflow based on normal kubernetes and [azure kubernetes](https://azure.microsoft.com/en-us/services/kubernetes-service/).
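A minimal sketch of how the `trainingServicePlatform` options above fit into an experiment config; this shows remote mode with one SSH machine, and the IP, username, and password values are placeholders, not values from this repo:

```yaml
# Hedged sketch: remote mode with a single SSH machine (placeholder values)
trainingServicePlatform: remote
machineList:
  - ip: 10.1.1.1
    port: 22
    username: your_user
    passwd: your_password
```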
diff --git a/docs/en_US/FAQ.md b/docs/en_US/FAQ.md index 61f108b1c0..05756fd08b 100644 --- a/docs/en_US/FAQ.md +++ b/docs/en_US/FAQ.md @@ -2,14 +2,13 @@ This page is for frequently asked questions and answers. - ### tmp folder is full -nnictl will use tmp folder as a temporary folder to copy files under codeDir when executing experimentation creation. +nnictl uses the tmp folder as a temporary folder to copy files under codeDir when creating an experiment. When you meet errors like the one below, try cleaning up the **tmp** folder first. > OSError: [Errno 28] No space left on device ### Cannot get trials' metrics in OpenPAI mode -In OpenPAI training mode, we start a rest server which listens on 51189 port in NNI Manager to receive metrcis reported from trials running in OpenPAI cluster. If you didn't see any metrics from WebUI in OpenPAI mode, check your machine where NNI manager runs on to make sure 51189 port is turned on in the firewall rule. +In OpenPAI training mode, we start a rest server that listens on port 51189 in NNI Manager to receive metrics reported from trials running in the OpenPAI cluster. If you don't see any metrics from the WebUI in OpenPAI mode, check the machine where NNI Manager runs to make sure port 51189 is open in the firewall rules. ### Segmentation Fault (core dumped) when installing > make: *** [install-XXX] Segmentation fault (core dumped) @@ -19,7 +18,7 @@ Please try the following solutions in turn: * Install NNI with `--no-cache-dir` flag like `python3 -m pip install nni --no-cache-dir` ### Job management error: getIPV4Address() failed because os.networkInterfaces().eth0 is undefined. -Your machine don't have eth0 device, please set [nniManagerIp](ExperimentConfig.md) in your config file manually. +Your machine doesn't have an eth0 device; please set [nniManagerIp](ExperimentConfig.md) in your config file manually.
### Exceed the MaxDuration but didn't stop When the duration of experiment reaches the maximum duration, nniManager will not create new trials, but the existing trials will continue unless user manually stop the experiment. @@ -28,7 +27,14 @@ When the duration of experiment reaches the maximum duration, nniManager will no If you upgrade your NNI or you delete some config files of NNI when there is an experiment running, this kind of issue may happen because the loss of config file. You could use `ps -ef | grep node` to find the pid of your experiment, and use `kill -9 {pid}` to kill it manually. ### Could not get `default metric` in webUI of virtual machines -Config the network mode to bridge mode or other mode that could make virtual machine's host accessible from external machine, and make sure the port of virtual machine is not forbidden by firewall. +Configure the network mode to bridge mode or another mode that makes the virtual machine's host accessible from external machines, and make sure the virtual machine's port is not blocked by the firewall. + +### Could not open webUI link +Inability to open the WebUI may have the following causes: + +* http://127.0.0.1, http://172.17.0.1 and http://10.0.0.15 refer to localhost; if you start your experiment on a server or remote machine, you can replace the IP with your server IP to view the WebUI, like http://[your_server_ip]:8080 +* If you still can't see the WebUI after using the server IP, check the proxy and the firewall of your machine, or use the browser on the machine where you started your NNI experiment. +* Another reason may be that your experiment failed and NNI could not get the experiment information.
You can check the log of NNIManager in the following directory: ~/nni/experiment/[your_experiment_id]/log/nnimanager.log ### Windows local mode problems Please refer to [NNI Windows local mode](WindowsLocalMode.md) diff --git a/docs/en_US/FrameworkControllerMode.md b/docs/en_US/FrameworkControllerMode.md index 9d4c410786..14574f43c3 100644 --- a/docs/en_US/FrameworkControllerMode.md +++ b/docs/en_US/FrameworkControllerMode.md @@ -100,4 +100,4 @@ Trial configuration in frameworkcontroller mode have the following configuration After you prepare a config file, you can run your experiment with nnictl. The way to start an experiment on frameworkcontroller is similar to kubeflow; please refer to the [document](./KubeflowMode.md) for more information. ## version check -NNI support version check feature in since version 0.6, [refer](PAIMode.md) \ No newline at end of file +NNI has supported the version check feature since version 0.6, [refer](PaiMode.md) \ No newline at end of file diff --git a/docs/en_US/gbdt_example.md b/docs/en_US/GbdtExample.md similarity index 100% rename from docs/en_US/gbdt_example.md rename to docs/en_US/GbdtExample.md diff --git a/docs/en_US/gridsearchTuner.md b/docs/en_US/GridsearchTuner.md similarity index 100% rename from docs/en_US/gridsearchTuner.md rename to docs/en_US/GridsearchTuner.md diff --git a/docs/en_US/HowToDebug.md b/docs/en_US/HowToDebug.md index e33cc45480..2ad14e9295 100644 --- a/docs/en_US/HowToDebug.md +++ b/docs/en_US/HowToDebug.md @@ -21,7 +21,7 @@ There are three kinds of log in NNI. When creating a new experiment, you can spe All possible errors that happen when launching an NNI experiment can be found here. -You can use `nnictl log stderr` to find error information. For more options please refer to [NNICTL](NNICTLDOC.md) +You can use `nnictl log stderr` to find error information.
For more options, please refer to [NNICTL](Nnictl.md) ### Experiment Root Directory diff --git a/docs/en_US/HowToImplementTrainingService.md b/docs/en_US/HowToImplementTrainingService.md index 582497c382..ca9b86f255 100644 --- a/docs/en_US/HowToImplementTrainingService.md +++ b/docs/en_US/HowToImplementTrainingService.md @@ -7,7 +7,7 @@ TrainingService is a module related to platform management and job schedule in N ## System architecture ![](../img/NNIDesign.jpg) -The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of system, in charge of calling TrainingService to manage trial jobs and the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs, it communicates with nniManager module, and has different instance according to different training platform. For the time being, NNI supports local platfrom, [remote platfrom](RemoteMachineMode.md), [PAI platfrom](PAIMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platfrom](FrameworkController.md). +The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of the system, in charge of calling TrainingService to manage trial jobs and of the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module that manages trial jobs; it communicates with the NNIManager module and has a different instance for each training platform. For the time being, NNI supports the local platform, [remote platform](RemoteMachineMode.md), [PAI platform](PaiMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platform](FrameworkController.md). In this document, we introduce the brief design of TrainingService.
If users want to add a new TrainingService instance, they just need to implement a child class of TrainingService; they don't need to understand the code details of NNIManager, Dispatcher or other modules. ## Folder structure of code @@ -146,4 +146,4 @@ When users submit a trial job to cloud platform, they should wrap their trial co ## Reference For more information about how to debug, please [refer](HowToDebug.md). -The guide line of how to contribute, please [refer](CONTRIBUTING). \ No newline at end of file +For guidelines on how to contribute, please [refer](Contributing.md). \ No newline at end of file diff --git a/docs/en_US/hyperbandAdvisor.md b/docs/en_US/HyperbandAdvisor.md similarity index 100% rename from docs/en_US/hyperbandAdvisor.md rename to docs/en_US/HyperbandAdvisor.md diff --git a/docs/en_US/hyperoptTuner.md b/docs/en_US/HyperoptTuner.md similarity index 100% rename from docs/en_US/hyperoptTuner.md rename to docs/en_US/HyperoptTuner.md diff --git a/docs/en_US/Installation.md b/docs/en_US/Installation.md index ec29c50ce5..91156481b8 100644 --- a/docs/en_US/Installation.md +++ b/docs/en_US/Installation.md @@ -94,12 +94,12 @@ Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is ## Further reading * [Overview](Overview.md) -* [Use command line tool nnictl](NNICTLDOC.md) +* [Use command line tool nnictl](Nnictl.md) * [Use NNIBoard](WebUI.md) * [Define search space](SearchSpaceSpec.md) * [Config an experiment](ExperimentConfig.md) * [How to run an experiment on local (with multiple GPUs)?](LocalMode.md) * [How to run an experiment on multiple machines?](RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](PAIMode.md) +* [How to run an experiment on OpenPAI?](PaiMode.md) * [How to run an experiment on Kubernetes through Kubeflow?](KubeflowMode.md) * [How to run an experiment on Kubernetes through FrameworkController?](FrameworkControllerMode.md) diff --git a/docs/en_US/KubeflowMode.md
b/docs/en_US/KubeflowMode.md index 44ceb7dffb..ccec0a8005 100644 --- a/docs/en_US/KubeflowMode.md +++ b/docs/en_US/KubeflowMode.md @@ -197,6 +197,6 @@ Notice: In kubeflow mode, NNIManager will start a rest server and listen on a po Once a trial job is completed, you can go to NNI WebUI's overview page (like http://localhost:8080/oview) to check the trial's information. ## version check -NNI support version check feature in since version 0.6, [refer](PAIMode.md) +NNI has supported the version check feature since version 0.6, [refer](PaiMode.md) If you hit any problems when using NNI in kubeflow mode, please create issues on the [NNI Github repo](https://github.com/Microsoft/nni). diff --git a/docs/en_US/LocalMode.md b/docs/en_US/LocalMode.md index c5fce7b234..3aae6661dd 100644 --- a/docs/en_US/LocalMode.md +++ b/docs/en_US/LocalMode.md @@ -85,14 +85,14 @@ Let's use a simple trial example, e.g. mnist, provided by NNI. After you install This command will be filled in the YAML configure file below. Please refer to [here](Trials.md) for how to write your own trial. -**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](Customize_Tuner.md)), but for simplicity, here we choose a tuner provided by NNI as below: +**Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), the Evolution algorithm, etc. Users can write their own tuner (refer to [here](CustomizeTuner.md)), but for simplicity, here we choose a tuner provided by NNI as below: tuner: builtinTunerName: TPE classArgs: optimize_mode: maximize -*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments pass to the tuner (the spec of builtin tuners can be found [here](Builtin_Tuner.md)), *optimization_mode* is to indicate whether you want to maximize or minimize your trial's result.
+*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments passed to the tuner (the spec of builtin tuners can be found [here](BuiltinTuner.md)), and *optimize_mode* indicates whether you want to maximize or minimize your trial's result. **Prepare configure file**: Since you already know which trial code you are going to run and which tuner you are going to use, it is time to prepare the YAML configure file. NNI provides a demo configure file for each trial example; run `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below: @@ -130,7 +130,7 @@ With all these steps done, we can run the experiment with the following command: nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml -You can refer to [here](NNICTLDOC.md) for more usage guide of *nnictl* command line tool. +You can refer to [here](Nnictl.md) for a more detailed usage guide of the *nnictl* command line tool. ## View experiment results The experiment is running now. Other than *nnictl*, NNI also provides a WebUI for you to view experiment progress, to control your experiment, and some other appealing features.
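Putting the pieces above together, a hedged sketch of what such a config.yml typically contains; the field values here are illustrative, not the exact contents of the demo file:

```yaml
authorName: default
experimentName: example_mnist
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 10
trainingServicePlatform: local
useAnnotation: true
tuner:
  builtinTunerName: TPE
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 mnist.py
  codeDir: .
  gpuNum: 0
```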
diff --git a/docs/en_US/medianstopAssessor.md b/docs/en_US/MedianstopAssessor.md similarity index 100% rename from docs/en_US/medianstopAssessor.md rename to docs/en_US/MedianstopAssessor.md diff --git a/docs/en_US/metisTuner.md b/docs/en_US/MetisTuner.md similarity index 100% rename from docs/en_US/metisTuner.md rename to docs/en_US/MetisTuner.md diff --git a/docs/en_US/mnist_examples.md b/docs/en_US/MnistExamples.md similarity index 100% rename from docs/en_US/mnist_examples.md rename to docs/en_US/MnistExamples.md diff --git a/docs/en_US/multiPhase.md b/docs/en_US/MultiPhase.md similarity index 100% rename from docs/en_US/multiPhase.md rename to docs/en_US/MultiPhase.md diff --git a/docs/en_US/networkmorphismTuner.md b/docs/en_US/NetworkmorphismTuner.md similarity index 100% rename from docs/en_US/networkmorphismTuner.md rename to docs/en_US/NetworkmorphismTuner.md diff --git a/docs/en_US/NNICTLDOC.md b/docs/en_US/Nnictl.md similarity index 99% rename from docs/en_US/NNICTLDOC.md rename to docs/en_US/Nnictl.md index d9906c134e..5eaa51e25e 100644 --- a/docs/en_US/NNICTLDOC.md +++ b/docs/en_US/Nnictl.md @@ -453,7 +453,7 @@ Debug mode will disable version check function in Trialkeeper. 
> import data to a running experiment ```bash - nnictl experiment [experiment_id] -f experiment_data.json + nnictl experiment import [experiment_id] -f experiment_data.json ``` diff --git a/docs/en_US/Overview.md b/docs/en_US/Overview.md index 0757b9ccf2..1d47b6243e 100644 --- a/docs/en_US/Overview.md +++ b/docs/en_US/Overview.md @@ -49,11 +49,11 @@ More details about how to run an experiment, please refer to [Get Started](Quick ## Learn More * [Get started](QuickStart.md) * [How to adapt your trial code on NNI?](Trials.md) -* [What are tuners supported by NNI?](Builtin_Tuner.md) -* [How to customize your own tuner?](Customize_Tuner.md) -* [What are assessors supported by NNI?](Builtin_Assessors.md) -* [How to customize your own assessor?](Customize_Assessor.md) +* [What are tuners supported by NNI?](BuiltinTuner.md) +* [How to customize your own tuner?](CustomizeTuner.md) +* [What are assessors supported by NNI?](BuiltinAssessors.md) +* [How to customize your own assessor?](CustomizeAssessor.md) * [How to run an experiment on local?](LocalMode.md) * [How to run an experiment on multiple machines?](RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](PAIMode.md) -* [Examples](mnist_examples.md) \ No newline at end of file +* [How to run an experiment on OpenPAI?](PaiMode.md) +* [Examples](MnistExamples.md) \ No newline at end of file diff --git a/docs/en_US/PAIMode.md b/docs/en_US/PaiMode.md similarity index 100% rename from docs/en_US/PAIMode.md rename to docs/en_US/PaiMode.md diff --git a/docs/en_US/QuickStart.md b/docs/en_US/QuickStart.md index 4ef2651efe..9f7e929ac7 100644 --- a/docs/en_US/QuickStart.md +++ b/docs/en_US/QuickStart.md @@ -157,7 +157,7 @@ Run the **config_windows.yml** file from your command line to start MNIST experi nnictl create --config nni/examples/trials/mnist/config_windows.yml ``` -Note, **nnictl** is a command line tool, which can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, 
etc. Click [here](NNICTLDOC.md) for more usage of `nnictl` +Note, **nnictl** is a command line tool that can be used to control experiments, such as start/stop/resume an experiment and start/stop NNIBoard, etc. Click [here](Nnictl.md) for more usage of `nnictl` Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. And this is what we expected to get: @@ -197,7 +197,7 @@ After you start your experiment in NNI successfully, you can find a message in t The Web UI urls are: [Your IP]:8080 ``` -Open the `Web UI url`(In this information is: `[Your IP]:8080`) in your browser, you can view detail information of the experiment and all the submitted trial jobs as shown below. +Open the `Web UI url` (in this case: `[Your IP]:8080`) in your browser; you can view detailed information about the experiment and all the submitted trial jobs as shown below. If you cannot open the WebUI link in your terminal, you can refer to the [FAQ](FAQ.md). #### View summary page @@ -243,12 +243,12 @@ Below is the status of all the trials.
Specifically: ## Related Topic -* [Try different Tuners](Builtin_Tuner.md) -* [Try different Assessors](Builtin_Assessors.md) -* [How to use command line tool nnictl](NNICTLDOC.md) +* [Try different Tuners](BuiltinTuner.md) +* [Try different Assessors](BuiltinAssessors.md) +* [How to use command line tool nnictl](Nnictl.md) * [How to write a trial](Trials.md) * [How to run an experiment on local (with multiple GPUs)?](LocalMode.md) * [How to run an experiment on multiple machines?](RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](PAIMode.md) +* [How to run an experiment on OpenPAI?](PaiMode.md) * [How to run an experiment on Kubernetes through Kubeflow?](KubeflowMode.md) * [How to run an experiment on Kubernetes through FrameworkController?](FrameworkControllerMode.md) diff --git a/docs/en_US/RELEASE.md b/docs/en_US/Release.md similarity index 95% rename from docs/en_US/RELEASE.md rename to docs/en_US/Release.md index 273bb5b37a..84c369e4b0 100644 --- a/docs/en_US/RELEASE.md +++ b/docs/en_US/Release.md @@ -6,9 +6,9 @@ * [Support NNI on Windows](./WindowsLocalMode.md) * NNI running on windows for local mode -* [New advisor: BOHB](./bohbAdvisor.md) +* [New advisor: BOHB](./BohbAdvisor.md) * Support a new advisor BOHB, which is a robust and efficient hyperparameter tuning algorithm, combines the advantages of Bayesian optimization and Hyperband -* [Support import and export experiment data through nnictl](./NNICTLDOC.md#experiment) +* [Support import and export experiment data through nnictl](./Nnictl.md#experiment) * Generate analysis results report after the experiment execution * Support import data to tuner and advisor for tuning * [Designated gpu devices for NNI trial jobs](./ExperimentConfig.md#localConfig) @@ -31,7 +31,7 @@ ### Major Features -* [Version checking](https://github.com/Microsoft/nni/blob/master/docs/en_US/PAIMode.md#version-check) +* [Version checking](https://github.com/Microsoft/nni/blob/master/docs/en_US/PaiMode.md#version-check) 
* check whether the version is consistent between nniManager and trialKeeper * [Report final metrics for early stop job](https://github.com/Microsoft/nni/issues/776) * If includeIntermediateResults is true, the last intermediate result of the trial that is early stopped by the assessor is sent to the tuner as the final result. The default value of includeIntermediateResults is false. @@ -87,10 +87,10 @@ #### New tuner and assessor supports -* Support [Metis tuner](metisTuner.md) as a new NNI tuner. Metis algorithm has been proofed to be well performed for **online** hyper-parameter tuning. +* Support [Metis tuner](MetisTuner.md) as a new NNI tuner. The Metis algorithm has been shown to perform well for **online** hyper-parameter tuning. * Support [ENAS customized tuner](https://github.com/countif/enas_nni), a tuner contributed by a github community user; it is an algorithm for neural network search that can learn a neural network architecture via reinforcement learning and achieve better performance than NAS. -* Support [Curve fitting assessor](curvefittingAssessor.md) for early stop policy using learning curve extrapolation. -* Advanced Support of [Weight Sharing](./AdvancedNAS.md): Enable weight sharing for NAS tuners, currently through NFS. +* Support [Curve fitting assessor](CurvefittingAssessor.md) for an early stop policy using learning curve extrapolation. +* Advanced Support of [Weight Sharing](./AdvancedNas.md): Enable weight sharing for NAS tuners, currently through NFS.
#### Training Service Enhancement @@ -112,7 +112,7 @@ #### New tuner supports -* Support [network morphism](networkmorphismTuner.md) as a new tuner +* Support [network morphism](NetworkmorphismTuner.md) as a new tuner #### Training Service improvements @@ -146,8 +146,8 @@ * [Kubeflow Training service](./KubeflowMode.md) * Support tf-operator * [Distributed trial example](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-distributed/dist_mnist.py) on Kubeflow -* [Grid search tuner](gridsearchTuner.md) -* [Hyperband tuner](hyperbandAdvisor.md) +* [Grid search tuner](GridsearchTuner.md) +* [Hyperband tuner](HyperbandAdvisor.md) * Support launch NNI experiment on MAC * WebUI * UI support for hyperband tuner @@ -182,7 +182,7 @@ ``` * Support updating max trial number. - use `nnictl update --help` to learn more. Or refer to [NNICTL Spec](NNICTLDOC.md) for the fully usage of NNICTL. + use `nnictl update --help` to learn more. Or refer to [NNICTL Spec](Nnictl.md) for the full usage of NNICTL. ### API new features and updates @@ -227,10 +227,10 @@ ### Major Features -* Support [OpenPAI](https://github.com/Microsoft/pai) Training Platform (See [here](./PAIMode.md) for instructions about how to submit NNI job in pai mode) +* Support [OpenPAI](https://github.com/Microsoft/pai) Training Platform (See [here](./PaiMode.md) for instructions about how to submit NNI job in pai mode) * Support training services on pai mode.
NNI trials will be scheduled to run on OpenPAI cluster * NNI trial's output (including logs and model file) will be copied to OpenPAI HDFS for further debugging and checking -* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (See [here](smacTuner.md) for instructions about how to use SMAC tuner) +* Support [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) tuner (See [here](SmacTuner.md) for instructions about how to use SMAC tuner) * [SMAC](https://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf) is based on Sequential Model-Based Optimization (SMBO). It adapts the most prominent previously used model class (Gaussian stochastic process models) and introduces the model class of random forests to SMBO to handle categorical parameters. The SMAC supported by NNI is a wrapper around [SMAC3](https://github.com/automl/SMAC3) * Support NNI installation on [conda](https://conda.io/docs/index.html) and python virtual environment * Others diff --git a/docs/en_US/RemoteMachineMode.md b/docs/en_US/RemoteMachineMode.md index 2d18dc7c71..f5e0aa3859 100644 --- a/docs/en_US/RemoteMachineMode.md +++ b/docs/en_US/RemoteMachineMode.md @@ -65,4 +65,4 @@ nnictl create --config ~/nni/examples/trials/mnist-annotation/config_remote.yml to start the experiment. ## version check -NNI support version check feature in since version 0.6, [refer](PAIMode.md) \ No newline at end of file +NNI supports the version check feature since version 0.6, [refer](PaiMode.md) \ No newline at end of file diff --git a/docs/en_US/SearchSpaceSpec.md b/docs/en_US/SearchSpaceSpec.md index f13300c7ce..0a5f06737f 100644 --- a/docs/en_US/SearchSpaceSpec.md +++ b/docs/en_US/SearchSpaceSpec.md @@ -6,7 +6,7 @@ In NNI, tuner will sample parameters/architecture according to the search space, To define a search space, users should define the name of variable, the type of sampling strategy and its parameters. 
-* A example of search space definition as follow: +* An example of search space definition as follows: ```yaml { @@ -26,9 +26,18 @@ Take the first line as an example. `dropout_rate` is defined as a variable whose All types of sampling strategies and their parameter are listed here: * {"_type":"choice","_value":options} - * Which means the variable value is one of the options, which should be a list. The elements of options can themselves be [nested] stochastic expressions. In this case, the stochastic choices that only appear in some of the options become conditional parameters. + + * Which means the variable's value is one of the options. Here 'options' should be a list. Each element of options can be a number or a string. It can also be a nested sub-search-space, which takes effect only when the corresponding element is chosen. The variables in such a sub-search-space can be seen as conditional variables. + + * A simple [example](../../examples/trials/mnist-cascading-search-space/search_space.json) of a nested search space definition. If an element in the options list is a dict, it is a sub-search-space, and for our built-in tuners you have to add a key '_name' in this dict, which helps to identify which element is chosen. Accordingly, here is a [sample](../../examples/trials/mnist-cascading-search-space/sample.json) that users can get from NNI with a nested search space definition. The tuners that support nested search spaces are as follows: + + - Random Search + - TPE + - Anneal + - Evolution * {"_type":"randint","_value":[upper]} + * Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds, for example.
If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as either quniform, qloguniform, qnormal or qlognormal. Note that if you want to change lower bound, you can use `quniform` for now. * {"_type":"uniform","_value":[low, high]} @@ -48,6 +57,7 @@ All types of sampling strategies and their parameter are listed here: * Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below. * {"_type":"normal","_value":[mu, sigma]} + * Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable. * {"_type":"qnormal","_value":[mu, sigma, q]} @@ -55,6 +65,7 @@ All types of sampling strategies and their parameter are listed here: * Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded. * {"_type":"lognormal","_value":[mu, sigma]} + * Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive. * {"_type":"qlognormal","_value":[mu, sigma, q]} diff --git a/docs/en_US/SetupNNIDeveloperEnvironment.md b/docs/en_US/SetupNniDeveloperEnvironment.md similarity index 95% rename from docs/en_US/SetupNNIDeveloperEnvironment.md rename to docs/en_US/SetupNniDeveloperEnvironment.md index 2bd47d129a..737354abce 100644 --- a/docs/en_US/SetupNNIDeveloperEnvironment.md +++ b/docs/en_US/SetupNniDeveloperEnvironment.md @@ -63,4 +63,4 @@ After the code changes, use **step 3** to rebuild your codes, then the changes w --- At last, wish you have a wonderful day. 
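The sampling-type semantics documented in `SearchSpaceSpec.md` above can be illustrated with a short sketch. This is not NNI code — in practice the tuner, not the trial, draws samples from the search space — and the `sample` helper below is hypothetical, written with numpy purely to mirror the described semantics of `choice`, `randint`, `uniform`, `normal`, and `lognormal`:

```python
import numpy as np

# Hypothetical illustration of the SearchSpaceSpec sampling semantics;
# in NNI the tuner performs the sampling, user code never does this.
search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
    "conv_size": {"_type": "choice", "_value": [2, 3, 5, 7]},
    "seed": {"_type": "randint", "_value": [10]},
    "init_mean": {"_type": "normal", "_value": [0.0, 1.0]},
    "lr_scale": {"_type": "lognormal", "_value": [0.0, 0.25]},
}

def sample(spec, rng):
    """Draw one value according to the spec's _type/_value."""
    t, v = spec["_type"], spec["_value"]
    if t == "choice":
        return v[rng.randint(len(v))]          # one element of the options list
    if t == "randint":
        return int(rng.randint(v[0]))          # integer in [0, upper)
    if t == "uniform":
        return rng.uniform(v[0], v[1])         # real value in [low, high]
    if t == "normal":
        return rng.normal(v[0], v[1])          # unconstrained, mean mu, std sigma
    if t == "lognormal":
        return np.exp(rng.normal(v[0], v[1]))  # exp(normal(mu, sigma)), always positive
    raise ValueError("unsupported _type: %s" % t)

rng = np.random.RandomState(0)
params = {name: sample(spec, rng) for name, spec in search_space.items()}
```

A nested `choice` option would instead be a dict carrying a `_name` key, as in the rewritten `search_space.json` later in this diff; per the documentation above, only Random Search, TPE, Anneal, and Evolution handle that nested form.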
-For more contribution guidelines on making PR's or issues to NNI source code, you can refer to our [CONTRIBUTING](./CONTRIBUTING.md) document. +For more contribution guidelines on making PR's or issues to NNI source code, you can refer to our [Contributing](./Contributing.md) document. diff --git a/docs/en_US/sklearn_examples.md b/docs/en_US/SklearnExamples.md similarity index 100% rename from docs/en_US/sklearn_examples.md rename to docs/en_US/SklearnExamples.md diff --git a/docs/en_US/smacTuner.md b/docs/en_US/SmacTuner.md similarity index 100% rename from docs/en_US/smacTuner.md rename to docs/en_US/SmacTuner.md diff --git a/docs/en_US/SQuAD_evolution_examples.md b/docs/en_US/SquadEvolutionExamples.md similarity index 100% rename from docs/en_US/SQuAD_evolution_examples.md rename to docs/en_US/SquadEvolutionExamples.md diff --git a/docs/en_US/Trials.md b/docs/en_US/Trials.md index 2bc8f27f07..382ef65024 100644 --- a/docs/en_US/Trials.md +++ b/docs/en_US/Trials.md @@ -41,14 +41,14 @@ RECEIVED_PARAMS = nni.get_next_parameter() ```python nni.report_intermediate_result(metrics) ``` -`metrics` could be any python object. If users use NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number e.g., float, int, 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to [assessor](Builtin_Assessors.md). Usually, `metrics` could be periodically evaluated loss or accuracy. +`metrics` could be any python object. If users use NNI built-in tuner/assessor, `metrics` can only have two formats: 1) a number e.g., float, int, 2) a dict object that has a key named `default` whose value is a number. This `metrics` is reported to [assessor](BuiltinAssessors.md). Usually, `metrics` could be periodically evaluated loss or accuracy. - Report performance of the configuration ```python nni.report_final_result(metrics) ``` -`metrics` also could be any python object. 
If users use NNI built-in tuner/assessor, `metrics` follows the same format rule as that in `report_intermediate_result`, the number indicates the model's performance, for example, the model's accuracy, loss etc. This `metrics` is reported to [tuner](Builtin_Tuner.md). +`metrics` can also be any Python object. If users use an NNI built-in tuner/assessor, `metrics` follows the same format rule as in `report_intermediate_result`; the number indicates the model's performance, for example, its accuracy or loss. This `metrics` is reported to [tuner](BuiltinTuner.md). ### Step 3 - Enable NNI API @@ -156,8 +156,8 @@ For more information, please refer to [HowToDebug](HowToDebug.md) ## More Trial Examples -* [MNIST examples](mnist_examples.md) -* [Finding out best optimizer for Cifar10 classification](cifar10_examples.md) -* [How to tune Scikit-learn on NNI](sklearn_examples.md) -* [Automatic Model Architecture Search for Reading Comprehension.](SQuAD_evolution_examples.md) -* [Tuning GBDT on NNI](gbdt_example.md) +* [MNIST examples](MnistExamples.md) +* [Finding out best optimizer for Cifar10 classification](Cifar10Examples.md) +* [How to tune Scikit-learn on NNI](SklearnExamples.md) +* [Automatic Model Architecture Search for Reading Comprehension.](SquadEvolutionExamples.md) +* [Tuning GBDT on NNI](GbdtExample.md) diff --git a/docs/en_US/advanced.rst b/docs/en_US/advanced.rst index 3362096805..2ed7dc420f 100644 --- a/docs/en_US/advanced.rst +++ b/docs/en_US/advanced.rst @@ -2,5 +2,5 @@ Advanced Features ===================== .. toctree:: - MultiPhase - AdvancedNAS \ No newline at end of file + MultiPhase + AdvancedNas \ No newline at end of file diff --git a/docs/en_US/assessors.rst b/docs/en_US/assessors.rst index b65bd49f5a..2229782280 100644 --- a/docs/en_US/assessors.rst +++ b/docs/en_US/assessors.rst @@ -15,5 +15,5 @@ Like Tuners, users can either use built-in Assessors, or customize an Assessor o .. 
toctree:: :maxdepth: 2 - Builtin Assessors - Customized Assessors + Builtin Assessors + Customized Assessors diff --git a/docs/en_US/builtinAssessor.rst b/docs/en_US/builtinAssessor.rst deleted file mode 100644 index 5616570794..0000000000 --- a/docs/en_US/builtinAssessor.rst +++ /dev/null @@ -1,9 +0,0 @@ -Builtin-Assessors -================= - -.. toctree:: - :maxdepth: 1 - - Overview - Medianstop - Curvefitting \ No newline at end of file diff --git a/docs/en_US/builtinTuner.rst b/docs/en_US/builtinTuner.rst deleted file mode 100644 index ad9853c97f..0000000000 --- a/docs/en_US/builtinTuner.rst +++ /dev/null @@ -1,18 +0,0 @@ -Builtin-Tuners -================== - -.. toctree:: - :maxdepth: 1 - - Overview - TPE - Random Search - Anneal - Naive Evolution - SMAC - Batch Tuner - Grid Search - Hyperband - Network Morphism - Metis Tuner - BOHB \ No newline at end of file diff --git a/docs/en_US/builtin_assessor.rst b/docs/en_US/builtin_assessor.rst new file mode 100644 index 0000000000..c59743512c --- /dev/null +++ b/docs/en_US/builtin_assessor.rst @@ -0,0 +1,9 @@ +Builtin-Assessors +================= + +.. toctree:: + :maxdepth: 1 + + Overview + Medianstop + Curvefitting \ No newline at end of file diff --git a/docs/en_US/builtin_tuner.rst b/docs/en_US/builtin_tuner.rst new file mode 100644 index 0000000000..5066d35edc --- /dev/null +++ b/docs/en_US/builtin_tuner.rst @@ -0,0 +1,18 @@ +Builtin-Tuners +================== + +.. toctree:: + :maxdepth: 1 + + Overview + TPE + Random Search + Anneal + Naive Evolution + SMAC + Batch Tuner + Grid Search + Hyperband + Network Morphism + Metis Tuner + BOHB \ No newline at end of file diff --git a/docs/en_US/Contribution.rst b/docs/en_US/contribution.rst similarity index 52% rename from docs/en_US/Contribution.rst rename to docs/en_US/contribution.rst index 9107f039d9..3e2853b74c 100644 --- a/docs/en_US/Contribution.rst +++ b/docs/en_US/contribution.rst @@ -3,5 +3,5 @@ Contribute to NNI ############################### .. 
toctree:: - Development Setup - Contribution Guide \ No newline at end of file + Development Setup + Contribution Guide \ No newline at end of file diff --git a/docs/en_US/examples.rst b/docs/en_US/examples.rst new file mode 100644 index 0000000000..92183d1997 --- /dev/null +++ b/docs/en_US/examples.rst @@ -0,0 +1,12 @@ +###################### +Examples +###################### + +.. toctree:: + :maxdepth: 2 + + MNIST + Cifar10 + Scikit-learn + EvolutionSQuAD + GBDT diff --git a/docs/en_US/index.rst b/docs/en_US/index.rst index c253b47c60..dc7d64a7e2 100644 --- a/docs/en_US/index.rst +++ b/docs/en_US/index.rst @@ -13,10 +13,10 @@ Contents Overview QuickStart - Tutorials - Examples - Reference + Tutorials + Examples + Reference FAQ - Contribution - Changelog + Contribution + Changelog Blog diff --git a/docs/en_US/Reference.rst b/docs/en_US/reference.rst similarity index 89% rename from docs/en_US/Reference.rst rename to docs/en_US/reference.rst index f1d82c04d5..4d502e30f7 100644 --- a/docs/en_US/Reference.rst +++ b/docs/en_US/reference.rst @@ -4,7 +4,7 @@ References .. toctree:: :maxdepth: 3 - Command Line + Command Line Python API Annotation Configuration diff --git a/docs/en_US/training_services.rst b/docs/en_US/training_services.rst index 24798675f5..1cf6dd552f 100644 --- a/docs/en_US/training_services.rst +++ b/docs/en_US/training_services.rst @@ -4,6 +4,6 @@ Introduction to NNI Training Services .. toctree:: Local Remote - OpenPAI + OpenPAI Kubeflow FrameworkController \ No newline at end of file diff --git a/docs/en_US/tuners.rst b/docs/en_US/tuners.rst index ea181f6ec2..471db7037c 100644 --- a/docs/en_US/tuners.rst +++ b/docs/en_US/tuners.rst @@ -13,6 +13,6 @@ For details, please refer to the following tutorials: .. 
toctree:: :maxdepth: 2 - Builtin Tuners - Customized Tuners - Customized Advisor \ No newline at end of file + Builtin Tuners + Customized Tuners + Customized Advisor \ No newline at end of file diff --git a/docs/en_US/Tutorials.rst b/docs/en_US/tutorials.rst similarity index 100% rename from docs/en_US/Tutorials.rst rename to docs/en_US/tutorials.rst diff --git a/examples/trials/mnist-cascading-search-space/mnist.py b/examples/trials/mnist-cascading-search-space/mnist.py index 1e5e4dfa3d..91aaa690fb 100644 --- a/examples/trials/mnist-cascading-search-space/mnist.py +++ b/examples/trials/mnist-cascading-search-space/mnist.py @@ -131,21 +131,29 @@ def main(params): nni.report_final_result(test_acc) -def generate_defualt_params(): - params = {'data_dir': '/tmp/tensorflow/mnist/input_data', - 'batch_num': 1000, - 'batch_size': 200} - return params - +def get_params(): + ''' Get parameters from command line ''' + parser = argparse.ArgumentParser() + parser.add_argument("--data_dir", type=str, default='/tmp/tensorflow/mnist/input_data', help="data directory") + parser.add_argument("--batch_num", type=int, default=1000) + parser.add_argument("--batch_size", type=int, default=200) + args, _ = parser.parse_known_args() + return args def parse_init_json(data): params = {} for key in data: value = data[key] - if value == 'Empty': + layer_name = value["_name"] + if layer_name == 'Empty': + # Empty Layer params[key] = ['Empty'] + elif layer_name == 'Conv': + # Conv layer + params[key] = [layer_name, value['kernel_size'], value['kernel_size']] else: - params[key] = [value[0], value[1], value[1]] + # Pooling Layer + params[key] = [layer_name, value['pooling_size'], value['pooling_size']] return params @@ -157,7 +165,7 @@ def parse_init_json(data): RCV_PARAMS = parse_init_json(data) logger.debug(RCV_PARAMS) - params = generate_defualt_params() + params = vars(get_params()) params.update(RCV_PARAMS) print(RCV_PARAMS) diff --git 
a/examples/trials/mnist-cascading-search-space/sample.json b/examples/trials/mnist-cascading-search-space/sample.json index 77b0cbfd90..518dfeb05f 100644 --- a/examples/trials/mnist-cascading-search-space/sample.json +++ b/examples/trials/mnist-cascading-search-space/sample.json @@ -1,12 +1,17 @@ { - "layer2": "Empty", - "layer8": ["Conv", 2], - "layer3": ["Avg_pool", 5], - "layer0": ["Max_pool", 5], - "layer1": ["Conv", 2], - "layer6": ["Max_pool", 3], - "layer7": ["Max_pool", 5], - "layer9": ["Conv", 2], - "layer4": ["Avg_pool", 3], - "layer5": ["Avg_pool", 5] -} + "layer0": { + "_name": "Avg_pool", + "pooling_size": 3 + }, + "layer1": { + "_name": "Conv", + "kernel_size": 2 + }, + "layer2": { + "_name": "Empty" + }, + "layer3": { + "_name": "Conv", + "kernel_size": 5 + } +} \ No newline at end of file diff --git a/examples/trials/mnist-cascading-search-space/search_space.json b/examples/trials/mnist-cascading-search-space/search_space.json index caf9c36117..4f35ddb354 100644 --- a/examples/trials/mnist-cascading-search-space/search_space.json +++ b/examples/trials/mnist-cascading-search-space/search_space.json @@ -1,62 +1,114 @@ { - "layer0":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer1":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer2":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer3":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - 
"layer4":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer5":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer6":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer7":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer8":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]}, - "layer9":{"_type":"choice","_value":[ - "Empty", - ["Conv", {"_type":"choice","_value":[2,3,5]}], - ["Max_pool", {"_type":"choice","_value":[2,3,5]}], - ["Avg_pool", {"_type":"choice","_value":[2,3,5]}] - ]} + "layer0": { + "_type": "choice", + "_value": [{ + "_name": "Empty" + }, + { + "_name": "Conv", + "kernel_size": { + "_type": "choice", + "_value": [1, 2, 3, 5] + } + }, + { + "_name": "Max_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + }, + { + "_name": "Avg_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + } + ] + }, + "layer1": { + "_type": "choice", + "_value": [{ + "_name": "Empty" + }, + { + "_name": "Conv", + "kernel_size": { + "_type": "choice", + "_value": [1, 2, 3, 5] + } + }, + { + "_name": "Max_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + }, + { + "_name": "Avg_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] 
+ } + } + ] + }, + "layer2": { + "_type": "choice", + "_value": [{ + "_name": "Empty" + }, + { + "_name": "Conv", + "kernel_size": { + "_type": "choice", + "_value": [1, 2, 3, 5] + } + }, + { + "_name": "Max_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + }, + { + "_name": "Avg_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + } + ] + }, + "layer3": { + "_type": "choice", + "_value": [{ + "_name": "Empty" + }, + { + "_name": "Conv", + "kernel_size": { + "_type": "choice", + "_value": [1, 2, 3, 5] + } + }, + { + "_name": "Max_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + }, + { + "_name": "Avg_pool", + "pooling_size": { + "_type": "choice", + "_value": [2, 3, 5] + } + } + ] + } } \ No newline at end of file diff --git a/src/nni_manager/core/nnimanager.ts b/src/nni_manager/core/nnimanager.ts index 53bd21ab3a..9eee97c91e 100644 --- a/src/nni_manager/core/nnimanager.ts +++ b/src/nni_manager/core/nnimanager.ts @@ -372,7 +372,7 @@ class NNIManager implements Manager { private async periodicallyUpdateExecDuration(): Promise { let count: number = 1; - while (this.status.status !== 'STOPPING' && this.status.status !== 'STOPPED') { + while (!['ERROR', 'STOPPING', 'STOPPED'].includes(this.status.status)) { await delay(1000 * 1); // 1 seconds if (this.status.status === 'RUNNING') { this.experimentProfile.execDuration += 1; @@ -461,7 +461,7 @@ class NNIManager implements Manager { } let allFinishedTrialJobNum: number = this.currSubmittedTrialNum; let waitSubmittedToFinish: number; - while (this.status.status !== 'STOPPING' && this.status.status !== 'STOPPED') { + while (!['ERROR', 'STOPPING', 'STOPPED'].includes(this.status.status)) { const finishedTrialJobNum: number = await this.requestTrialJobsStatus(); allFinishedTrialJobNum += finishedTrialJobNum; @@ -671,7 +671,9 @@ class NNIManager implements Manager { 'ADD_HYPERPARAMETER', tunerCommand.trial_job_id, content, undefined); break; case 
NO_MORE_TRIAL_JOBS: - this.setStatus('TUNER_NO_MORE_TRIAL'); + if (!['ERROR', 'STOPPING', 'STOPPED'].includes(this.status.status)) { + this.setStatus('TUNER_NO_MORE_TRIAL'); + } break; case KILL_TRIAL_JOB: this.log.info(`cancelTrialJob: ${JSON.parse(content)}`); diff --git a/src/nni_manager/training_service/local/gpuScheduler.ts b/src/nni_manager/training_service/local/gpuScheduler.ts index ab4bb49a28..04ea3d3390 100644 --- a/src/nni_manager/training_service/local/gpuScheduler.ts +++ b/src/nni_manager/training_service/local/gpuScheduler.ts @@ -105,12 +105,16 @@ class GPUScheduler { } private async updateGPUSummary(): Promise { - const cmdresult: cpp.childProcessPromise.Result = - await execTail(path.join(this.gpuMetricCollectorScriptFolder, 'gpu_metrics')); - if (cmdresult && cmdresult.stdout) { - this.gpuSummary = JSON.parse(cmdresult.stdout); - } else { - this.log.error('Could not get gpu metrics information!'); + let gpuMetricPath = path.join(this.gpuMetricCollectorScriptFolder, 'gpu_metrics'); + if (fs.existsSync(gpuMetricPath)) { + const cmdresult: cpp.childProcessPromise.Result = await execTail(gpuMetricPath); + if (cmdresult && cmdresult.stdout) { + this.gpuSummary = JSON.parse(cmdresult.stdout); + } else { + this.log.error('Could not get gpu metrics information!'); + } + } else{ + this.log.warning('gpu_metrics file does not exist!') } } } diff --git a/src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py b/src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py index 881c0439cc..042848038f 100644 --- a/src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py +++ b/src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py @@ -21,7 +21,6 @@ bohb_advisor.py ''' -from enum import Enum, unique import sys import math import logging @@ -32,7 +31,7 @@ from nni.protocol import CommandType, send from nni.msg_dispatcher_base import MsgDispatcherBase -from nni.utils import extract_scalar_reward +from nni.utils import OptimizeMode, extract_scalar_reward from .config_generator import CG_BOHB @@ -42,12 +41,6 
@@ _KEY = 'TRIAL_BUDGET' _epsilon = 1e-6 -@unique -class OptimizeMode(Enum): - """Optimize Mode class""" - Minimize = 'minimize' - Maximize = 'maximize' - def create_parameter_id(): """Create an id diff --git a/src/sdk/pynni/nni/evolution_tuner/evolution_tuner.py b/src/sdk/pynni/nni/evolution_tuner/evolution_tuner.py index 5da87b49f9..b46d560bed 100644 --- a/src/sdk/pynni/nni/evolution_tuner/evolution_tuner.py +++ b/src/sdk/pynni/nni/evolution_tuner/evolution_tuner.py @@ -18,61 +18,34 @@ # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """ -evolution_tuner.py including: - class OptimizeMode - class Individual - class EvolutionTuner +evolution_tuner.py """ import copy -from enum import Enum, unique import random import numpy as np - from nni.tuner import Tuner -from nni.utils import extract_scalar_reward -from .. import parameter_expressions - -@unique -class OptimizeMode(Enum): - """Optimize Mode class +from nni.utils import NodeType, OptimizeMode, extract_scalar_reward, split_index - if OptimizeMode is 'minimize', it means the tuner need to minimize the reward - that received from Trial. +import nni.parameter_expressions as parameter_expressions - if OptimizeMode is 'maximize', it means the tuner need to maximize the reward - that received from Trial. 
- """ - Minimize = 'minimize' - Maximize = 'maximize' - -@unique -class NodeType(Enum): - """Node Type class - """ - Root = 'root' - Type = '_type' - Value = '_value' - Index = '_index' - - -def json2space(x, oldy=None, name=NodeType.Root.value): +def json2space(x, oldy=None, name=NodeType.ROOT): """Change search space from json format to hyperopt format """ y = list() if isinstance(x, dict): - if NodeType.Type.value in x.keys(): - _type = x[NodeType.Type.value] + if NodeType.TYPE in x.keys(): + _type = x[NodeType.TYPE] name = name + '-' + _type if _type == 'choice': if oldy != None: - _index = oldy[NodeType.Index.value] - y += json2space(x[NodeType.Value.value][_index], - oldy[NodeType.Value.value], name=name+'[%d]' % _index) + _index = oldy[NodeType.INDEX] + y += json2space(x[NodeType.VALUE][_index], + oldy[NodeType.VALUE], name=name+'[%d]' % _index) else: - y += json2space(x[NodeType.Value.value], None, name=name) + y += json2space(x[NodeType.VALUE], None, name=name) y.append(name) else: for key in x.keys(): @@ -80,28 +53,28 @@ def json2space(x, oldy=None, name=NodeType.Root.value): None else None), name+"[%s]" % str(key)) elif isinstance(x, list): for i, x_i in enumerate(x): + if isinstance(x_i, dict): + if NodeType.NAME not in x_i.keys(): + raise RuntimeError('\'_name\' key is not found in this nested search space.') y += json2space(x_i, (oldy[i] if oldy != None else None), name+"[%d]" % i) - else: - pass return y - -def json2paramater(x, is_rand, random_state, oldy=None, Rand=False, name=NodeType.Root.value): +def json2parameter(x, is_rand, random_state, oldy=None, Rand=False, name=NodeType.ROOT): """Json to pramaters. 
""" if isinstance(x, dict): - if NodeType.Type.value in x.keys(): - _type = x[NodeType.Type.value] - _value = x[NodeType.Value.value] + if NodeType.TYPE in x.keys(): + _type = x[NodeType.TYPE] + _value = x[NodeType.VALUE] name = name + '-' + _type Rand |= is_rand[name] if Rand is True: if _type == 'choice': _index = random_state.randint(len(_value)) y = { - NodeType.Index.value: _index, - NodeType.Value.value: json2paramater(x[NodeType.Value.value][_index], + NodeType.INDEX: _index, + NodeType.VALUE: json2parameter(x[NodeType.VALUE][_index], is_rand, random_state, None, @@ -116,39 +89,20 @@ def json2paramater(x, is_rand, random_state, oldy=None, Rand=False, name=NodeTyp else: y = dict() for key in x.keys(): - y[key] = json2paramater(x[key], is_rand, random_state, oldy[key] + y[key] = json2parameter(x[key], is_rand, random_state, oldy[key] if oldy != None else None, Rand, name + "[%s]" % str(key)) elif isinstance(x, list): y = list() for i, x_i in enumerate(x): - y.append(json2paramater(x_i, is_rand, random_state, oldy[i] + if isinstance(x_i, dict): + if NodeType.NAME not in x_i.keys(): + raise RuntimeError('\'_name\' key is not found in this nested search space.') + y.append(json2parameter(x_i, is_rand, random_state, oldy[i] if oldy != None else None, Rand, name + "[%d]" % i)) else: y = copy.deepcopy(x) return y - -def _split_index(params): - """Delete index information from params - - Parameters - ---------- - params : dict - - Returns - ------- - result : dict - """ - result = {} - for key in params: - if isinstance(params[key], dict): - value = params[key]['_value'] - else: - value = params[key] - result[key] = value - return result - - class Individual(object): """ Indicidual class to store the indv info. 
@@ -229,7 +183,7 @@ def update_search_space(self, search_space): for item in self.space: is_rand[item] = True for _ in range(self.population_size): - config = json2paramater( + config = json2parameter( self.searchspace_json, is_rand, self.random_state) self.population.append(Individual(config=config)) @@ -267,14 +221,14 @@ def generate_parameters(self, parameter_id): mutation_pos = space[random.randint(0, len(space)-1)] for i in range(len(self.space)): is_rand[self.space[i]] = (self.space[i] == mutation_pos) - config = json2paramater( + config = json2parameter( self.searchspace_json, is_rand, self.random_state, self.population[0].config) self.population.pop(1) # remove "_index" from config and save params-id total_config = config self.total_data[parameter_id] = total_config - config = _split_index(total_config) + config = split_index(total_config) return config def receive_trial_result(self, parameter_id, parameters, value): diff --git a/src/sdk/pynni/nni/evolution_tuner/test_evolution_tuner.py b/src/sdk/pynni/nni/evolution_tuner/test_evolution_tuner.py new file mode 100644 index 0000000000..982f48c544 --- /dev/null +++ b/src/sdk/pynni/nni/evolution_tuner/test_evolution_tuner.py @@ -0,0 +1,74 @@ +# Copyright (c) Microsoft Corporation +# All rights reserved. +# +# MIT License +# +# Permission is hereby granted, free of charge, +# to any person obtaining a copy of this software and associated +# documentation files (the "Software"), to deal in the Software without restriction, +# including without limitation the rights to use, copy, modify, merge, publish, +# distribute, sublicense, and/or sell copies of the Software, and +# to permit persons to whom the Software is furnished to do so, subject to the following conditions: +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. 
+# +# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING +# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +""" +test_evolution_tuner.py +""" + +import numpy as np + +from unittest import TestCase, main + +from nni.evolution_tuner.evolution_tuner import json2space, json2parameter + + +class EvolutionTunerTestCase(TestCase): + def test_json2space(self): + """test for json2space + """ + json_search_space = { + "optimizer": { + "_type": "choice", + "_value": ["Adam", "SGD"] + }, + "learning_rate": { + "_type": "choice", + "_value": [0.0001, 0.001, 0.002, 0.005, 0.01] + } + } + search_space_instance = json2space(json_search_space) + self.assertIn('root[optimizer]-choice', search_space_instance) + self.assertIn('root[learning_rate]-choice', search_space_instance) + + def test_json2parameter(self): + """test for json2parameter + """ + json_search_space = { + "optimizer":{ + "_type":"choice","_value":["Adam", "SGD"] + }, + "learning_rate":{ + "_type":"choice", + "_value":[0.0001, 0.001, 0.002, 0.005, 0.01] + } + } + space = json2space(json_search_space) + random_state = np.random.RandomState() + is_rand = dict() + for item in space: + is_rand[item] = True + search_space_instance = json2parameter(json_search_space, is_rand, random_state) + self.assertIn(search_space_instance["optimizer"]["_index"], range(2)) + self.assertIn(search_space_instance["optimizer"]["_value"], ["Adam", "SGD"]) + self.assertIn(search_space_instance["learning_rate"]["_index"], range(5)) + self.assertIn(search_space_instance["learning_rate"]["_value"], [0.0001, 0.001, 0.002, 0.005, 0.01]) + + +if __name__ == '__main__': + main() diff 
--git a/src/sdk/pynni/nni/gridsearch_tuner/gridsearch_tuner.py b/src/sdk/pynni/nni/gridsearch_tuner/gridsearch_tuner.py index 80c8a9ecd2..fc5b520168 100644 --- a/src/sdk/pynni/nni/gridsearch_tuner/gridsearch_tuner.py +++ b/src/sdk/pynni/nni/gridsearch_tuner/gridsearch_tuner.py @@ -56,7 +56,7 @@ def __init__(self): self.expanded_search_space = [] self.supplement_data = dict() - def json2paramater(self, ss_spec): + def json2parameter(self, ss_spec): ''' generate all possible configs for hyperparameters from hyperparameter space. ss_spec: hyperparameter space @@ -68,7 +68,7 @@ def json2paramater(self, ss_spec): chosen_params = list() if _type == 'choice': for value in _value: - choice = self.json2paramater(value) + choice = self.json2parameter(value) if isinstance(choice, list): chosen_params.extend(choice) else: @@ -78,12 +78,12 @@ def json2paramater(self, ss_spec): else: chosen_params = dict() for key in ss_spec.keys(): - chosen_params[key] = self.json2paramater(ss_spec[key]) + chosen_params[key] = self.json2parameter(ss_spec[key]) return self.expand_parameters(chosen_params) elif isinstance(ss_spec, list): chosen_params = list() for subspec in ss_spec[1:]: - choice = self.json2paramater(subspec) + choice = self.json2parameter(subspec) if isinstance(choice, list): chosen_params.extend(choice) else: @@ -135,7 +135,7 @@ def update_search_space(self, search_space): ''' Check if the search space is valid and expand it: only contains 'choice' type or other types beginnning with the letter 'q' ''' - self.expanded_search_space = self.json2paramater(search_space) + self.expanded_search_space = self.json2parameter(search_space) def generate_parameters(self, parameter_id): self.count += 1 diff --git a/src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py b/src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py index fa6b391911..7590672de7 100644 --- a/src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py +++ b/src/sdk/pynni/nni/hyperband_advisor/hyperband_advisor.py 
@@ -21,7 +21,6 @@ hyperband_advisor.py """ -from enum import Enum, unique import sys import math import copy @@ -31,8 +30,9 @@ from nni.protocol import CommandType, send from nni.msg_dispatcher_base import MsgDispatcherBase -from nni.utils import extract_scalar_reward -from .. import parameter_expressions +from nni.common import init_logger +from nni.utils import NodeType, OptimizeMode, extract_scalar_reward +import nni.parameter_expressions as parameter_expressions _logger = logging.getLogger(__name__) @@ -40,11 +40,6 @@ _KEY = 'TRIAL_BUDGET' _epsilon = 1e-6 -@unique -class OptimizeMode(Enum): - """Oprimize Mode class""" - Minimize = 'minimize' - Maximize = 'maximize' def create_parameter_id(): """Create an id @@ -82,7 +77,7 @@ def create_bracket_parameter_id(brackets_id, brackets_curr_decay, increased_id=- increased_id]) return params_id -def json2paramater(ss_spec, random_state): +def json2parameter(ss_spec, random_state): """Randomly generate values for hyperparameters from hyperparameter space i.e., x. Parameters @@ -98,23 +93,23 @@ def json2paramater(ss_spec, random_state): Parameters in this experiment """ if isinstance(ss_spec, dict): - if '_type' in ss_spec.keys(): - _type = ss_spec['_type'] - _value = ss_spec['_value'] + if NodeType.TYPE in ss_spec.keys(): + _type = ss_spec[NodeType.TYPE] + _value = ss_spec[NodeType.VALUE] if _type == 'choice': _index = random_state.randint(len(_value)) - chosen_params = json2paramater(ss_spec['_value'][_index], random_state) + chosen_params = json2parameter(ss_spec[NodeType.VALUE][_index], random_state) else: chosen_params = eval('parameter_expressions.' 
+ # pylint: disable=eval-used _type)(*(_value + [random_state])) else: chosen_params = dict() for key in ss_spec.keys(): - chosen_params[key] = json2paramater(ss_spec[key], random_state) + chosen_params[key] = json2parameter(ss_spec[key], random_state) elif isinstance(ss_spec, list): chosen_params = list() for _, subspec in enumerate(ss_spec): - chosen_params.append(json2paramater(subspec, random_state)) + chosen_params.append(json2parameter(subspec, random_state)) else: chosen_params = copy.deepcopy(ss_spec) return chosen_params @@ -246,7 +241,7 @@ def get_hyperparameter_configurations(self, num, r, searchspace_json, random_sta hyperparameter_configs = dict() for _ in range(num): params_id = create_bracket_parameter_id(self.bracket_id, self.i) - params = json2paramater(searchspace_json, random_state) + params = json2parameter(searchspace_json, random_state) params[_KEY] = r hyperparameter_configs[params_id] = params self._record_hyper_configs(hyperparameter_configs) diff --git a/src/sdk/pynni/nni/hyperopt_tuner/hyperopt_tuner.py b/src/sdk/pynni/nni/hyperopt_tuner/hyperopt_tuner.py index f5c9bca2c5..650d4c2ffc 100644 --- a/src/sdk/pynni/nni/hyperopt_tuner/hyperopt_tuner.py +++ b/src/sdk/pynni/nni/hyperopt_tuner/hyperopt_tuner.py @@ -17,39 +17,22 @@ # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
-''' +""" hyperopt_tuner.py -''' +""" import copy import logging -from enum import Enum, unique -import numpy as np - import hyperopt as hp +import numpy as np from nni.tuner import Tuner -from nni.utils import extract_scalar_reward +from nni.utils import NodeType, OptimizeMode, extract_scalar_reward, split_index logger = logging.getLogger('hyperopt_AutoML') -@unique -class OptimizeMode(Enum): - """ - Optimize Mode including Minimize and Maximize - """ - Minimize = 'minimize' - Maximize = 'maximize' - - -ROOT = 'root' -TYPE = '_type' -VALUE = '_value' -INDEX = '_index' - - -def json2space(in_x, name=ROOT): +def json2space(in_x, name=NodeType.ROOT): """ Change json to search space in hyperopt. @@ -58,16 +41,16 @@ def json2space(in_x, name=ROOT): in_x : dict/list/str/int/float The part of json. name : str - name could be ROOT, TYPE, VALUE or INDEX. + name could be NodeType.ROOT, NodeType.TYPE, NodeType.VALUE or NodeType.INDEX, NodeType.NAME. """ out_y = copy.deepcopy(in_x) if isinstance(in_x, dict): - if TYPE in in_x.keys(): - _type = in_x[TYPE] + if NodeType.TYPE in in_x.keys(): + _type = in_x[NodeType.TYPE] name = name + '-' + _type - _value = json2space(in_x[VALUE], name=name) + _value = json2space(in_x[NodeType.VALUE], name=name) if _type == 'choice': - out_y = eval('hp.hp.'+_type)(name, _value) + out_y = eval('hp.hp.choice')(name, _value) else: if _type in ['loguniform', 'qloguniform']: _value[:2] = np.log(_value[:2]) @@ -75,69 +58,92 @@ def json2space(in_x, name=ROOT): else: out_y = dict() for key in in_x.keys(): - out_y[key] = json2space(in_x[key], name+'[%s]' % str(key)) + out_y[key] = json2space(in_x[key], name + '[%s]' % str(key)) elif isinstance(in_x, list): out_y = list() for i, x_i in enumerate(in_x): - out_y.append(json2space(x_i, name+'[%d]' % i)) - else: - logger.info('in_x is not a dict or a list in json2space fuinction %s', str(in_x)) + if isinstance(x_i, dict): + if NodeType.NAME not in x_i.keys(): + raise RuntimeError( + '\'_name\' key is not 
found in this nested search space.' + ) + out_y.append(json2space(x_i, name + '[%d]' % i)) return out_y -def json2parameter(in_x, parameter, name=ROOT): +def json2parameter(in_x, parameter, name=NodeType.ROOT): """ Change json to parameters. """ out_y = copy.deepcopy(in_x) if isinstance(in_x, dict): - if TYPE in in_x.keys(): - _type = in_x[TYPE] + if NodeType.TYPE in in_x.keys(): + _type = in_x[NodeType.TYPE] name = name + '-' + _type if _type == 'choice': _index = parameter[name] out_y = { - INDEX: _index, - VALUE: json2parameter(in_x[VALUE][_index], parameter, name=name+'[%d]' % _index) + NodeType.INDEX: + _index, + NodeType.VALUE: + json2parameter(in_x[NodeType.VALUE][_index], + parameter, + name=name + '[%d]' % _index) } else: out_y = parameter[name] else: out_y = dict() for key in in_x.keys(): - out_y[key] = json2parameter( - in_x[key], parameter, name + '[%s]' % str(key)) + out_y[key] = json2parameter(in_x[key], parameter, + name + '[%s]' % str(key)) elif isinstance(in_x, list): out_y = list() for i, x_i in enumerate(in_x): + if isinstance(x_i, dict): + if NodeType.NAME not in x_i.keys(): + raise RuntimeError( + '\'_name\' key is not found in this nested search space.' 
+ ) out_y.append(json2parameter(x_i, parameter, name + '[%d]' % i)) - else: - logger.info('in_x is not a dict or a list in json2space fuinction %s', str(in_x)) return out_y -def json2vals(in_x, vals, out_y, name=ROOT): +def json2vals(in_x, vals, out_y, name=NodeType.ROOT): if isinstance(in_x, dict): - if TYPE in in_x.keys(): - _type = in_x[TYPE] + if NodeType.TYPE in in_x.keys(): + _type = in_x[NodeType.TYPE] name = name + '-' + _type try: - out_y[name] = vals[INDEX] + out_y[name] = vals[NodeType.INDEX] # TODO - catch exact Exception except Exception: out_y[name] = vals if _type == 'choice': - _index = vals[INDEX] - json2vals(in_x[VALUE][_index], vals[VALUE], - out_y, name=name + '[%d]' % _index) + _index = vals[NodeType.INDEX] + json2vals(in_x[NodeType.VALUE][_index], + vals[NodeType.VALUE], + out_y, + name=name + '[%d]' % _index) else: for key in in_x.keys(): - json2vals(in_x[key], vals[key], out_y, name + '[%s]' % str(key)) + json2vals(in_x[key], vals[key], out_y, + name + '[%s]' % str(key)) elif isinstance(in_x, list): for i, temp in enumerate(in_x): - json2vals(temp, vals[i], out_y, name + '[%d]' % i) + # nested json + if isinstance(temp, dict): + if NodeType.NAME not in temp.keys(): + raise RuntimeError( + '\'_name\' key is not found in this nested search space.' 
+ ) + else: + json2vals(temp, vals[i], out_y, name + '[%d]' % i) + else: + json2vals(temp, vals[i], out_y, name + '[%d]' % i) + def _add_index(in_x, parameter): """ @@ -156,41 +162,36 @@ def _add_index(in_x, parameter): value_type = in_x[TYPE] value_format = in_x[VALUE] if value_type == "choice": - choice_name = parameter[0] if isinstance(parameter, list) else parameter - for pos, item in enumerate(value_format): # here value_format is a list - if isinstance(item, list): # this format is ["choice_key", format_dict] + choice_name = parameter[0] if isinstance(parameter, + list) else parameter + for pos, item in enumerate( + value_format): # here value_format is a list + if isinstance( + item, + list): # this format is ["choice_key", format_dict] choice_key = item[0] choice_value_format = item[1] if choice_key == choice_name: - return {INDEX: pos, VALUE: [choice_name, _add_index(choice_value_format, parameter[1])]} + return { + INDEX: + pos, + VALUE: [ + choice_name, + _add_index(choice_value_format, parameter[1]) + ] + } elif choice_name == item: return {INDEX: pos, VALUE: item} else: return parameter -def _split_index(params): - """ - Delete index infromation from params - """ - if isinstance(params, list): - return [params[0], _split_index(params[1])] - elif isinstance(params, dict): - if INDEX in params.keys(): - return _split_index(params[VALUE]) - result = dict() - for key in params: - result[key] = _split_index(params[key]) - return result - else: - return params - class HyperoptTuner(Tuner): """ HyperoptTuner is a tuner which using hyperopt algorithm. 
""" - def __init__(self, algorithm_name, optimize_mode = 'minimize'): + def __init__(self, algorithm_name, optimize_mode='minimize'): """ Parameters ---------- @@ -234,11 +235,16 @@ def update_search_space(self, search_space): search_space_instance = json2space(self.json) rstate = np.random.RandomState() trials = hp.Trials() - domain = hp.Domain(None, search_space_instance, + domain = hp.Domain(None, + search_space_instance, pass_expr_memo_ctrl=None) algorithm = self._choose_tuner(self.algorithm_name) - self.rval = hp.FMinIter(algorithm, domain, trials, - max_evals=-1, rstate=rstate, verbose=0) + self.rval = hp.FMinIter(algorithm, + domain, + trials, + max_evals=-1, + rstate=rstate, + verbose=0) self.rval.catch_eval_exceptions = False def generate_parameters(self, parameter_id): @@ -259,7 +265,7 @@ def generate_parameters(self, parameter_id): # but it can cause deplicate parameter rarely total_params = self.get_suggestion(random_search=True) self.total_data[parameter_id] = total_params - params = _split_index(total_params) + params = split_index(total_params) return params def receive_trial_result(self, parameter_id, parameters, value): @@ -300,7 +306,7 @@ def receive_trial_result(self, parameter_id, parameters, value): json2vals(self.json, vals, out_y) vals = out_y for key in domain.params: - if key in [VALUE, INDEX]: + if key in [NodeType.VALUE, NodeType.INDEX]: continue if key not in vals or vals[key] is None or vals[key] == []: idxs[key] = vals[key] = [] @@ -308,17 +314,23 @@ def receive_trial_result(self, parameter_id, parameters, value): idxs[key] = [new_id] vals[key] = [vals[key]] - self.miscs_update_idxs_vals(rval_miscs, idxs, vals, + self.miscs_update_idxs_vals(rval_miscs, + idxs, + vals, idxs_map={new_id: new_id}, assert_all_vals_used=False) - trial = trials.new_trial_docs([new_id], rval_specs, rval_results, rval_miscs)[0] + trial = trials.new_trial_docs([new_id], rval_specs, rval_results, + rval_miscs)[0] trial['result'] = {'loss': reward, 'status': 
'ok'} trial['state'] = hp.JOB_STATE_DONE trials.insert_trial_docs([trial]) trials.refresh() - def miscs_update_idxs_vals(self, miscs, idxs, vals, + def miscs_update_idxs_vals(self, + miscs, + idxs, + vals, assert_all_vals_used=True, idxs_map=None): """ @@ -368,9 +380,10 @@ def get_suggestion(self, random_search=False): algorithm = rval.algo new_ids = rval.trials.new_trial_ids(1) rval.trials.refresh() - random_state = rval.rstate.randint(2**31-1) + random_state = rval.rstate.randint(2**31 - 1) if random_search: - new_trials = hp.rand.suggest(new_ids, rval.domain, trials, random_state) + new_trials = hp.rand.suggest(new_ids, rval.domain, trials, + random_state) else: new_trials = algorithm(new_ids, rval.domain, trials, random_state) rval.trials.refresh() @@ -396,7 +409,8 @@ def import_data(self, data): """ _completed_num = 0 for trial_info in data: - logger.info("Importing data, current processing progress %s / %s" %(_completed_num, len(data))) + logger.info("Importing data, current processing progress %s / %s" % + (_completed_num, len(data))) _completed_num += 1 if self.algorithm_name == 'random_search': return @@ -405,10 +419,16 @@ def import_data(self, data): assert "value" in trial_info _value = trial_info['value'] if not _value: - logger.info("Useless trial data, value is %s, skip this trial data." %_value) + logger.info( + "Useless trial data, value is %s, skip this trial data." 
% + _value) continue self.supplement_data_num += 1 - _parameter_id = '_'.join(["ImportData", str(self.supplement_data_num)]) - self.total_data[_parameter_id] = _add_index(in_x=self.json, parameter=_params) - self.receive_trial_result(parameter_id=_parameter_id, parameters=_params, value=_value) + _parameter_id = '_'.join( + ["ImportData", str(self.supplement_data_num)]) + self.total_data[_parameter_id] = _add_index(in_x=self.json, + parameter=_params) + self.receive_trial_result(parameter_id=_parameter_id, + parameters=_params, + value=_value) logger.info("Successfully import data to TPE/Anneal tuner.") diff --git a/src/sdk/pynni/nni/hyperopt_tuner/test_hyperopt_tuner.py b/src/sdk/pynni/nni/hyperopt_tuner/test_hyperopt_tuner.py new file mode 100644 index 0000000000..61ad422f5e --- /dev/null +++ b/src/sdk/pynni/nni/hyperopt_tuner/test_hyperopt_tuner.py @@ -0,0 +1,104 @@ +# Copyright (c) Microsoft Corporation +# All rights reserved. +# +# MIT License +# +# Permission is hereby granted, free of charge, +# to any person obtaining a copy of this software and associated +# documentation files (the "Software"), to deal in the Software without restriction, +# including without limitation the rights to use, copy, modify, merge, publish, +# distribute, sublicense, and/or sell copies of the Software, and +# to permit persons to whom the Software is furnished to do so, subject to the following conditions: +# The above copyright notice and this permission notice shall be included +# in all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING +# BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +""" +test_hyperopt_tuner.py +""" + +from unittest import TestCase, main + +import hyperopt as hp + +from nni.hyperopt_tuner.hyperopt_tuner import json2space, json2parameter, json2vals + + +class HyperoptTunerTestCase(TestCase): + def test_json2space(self): + """test for json2space + """ + json_search_space = { + "optimizer": { + "_type": "choice", + "_value": ["Adam", "SGD"] + }, + "learning_rate": { + "_type": "choice", + "_value": [0.0001, 0.001, 0.002, 0.005, 0.01] + } + } + search_space_instance = json2space(json_search_space) + self.assertIsInstance(search_space_instance["optimizer"], + hp.pyll.base.Apply) + self.assertIsInstance(search_space_instance["learning_rate"], + hp.pyll.base.Apply) + + def test_json2parameter(self): + """test for json2parameter + """ + json_search_space = { + "optimizer": { + "_type": "choice", + "_value": ["Adam", "SGD"] + }, + "learning_rate": { + "_type": "choice", + "_value": [0.0001, 0.001, 0.002, 0.005, 0.01] + } + } + parameter = { + 'root[learning_rate]-choice': 2, + 'root[optimizer]-choice': 0 + } + search_space_instance = json2parameter(json_search_space, parameter) + self.assertEqual(search_space_instance["optimizer"]["_index"], 0) + self.assertEqual(search_space_instance["optimizer"]["_value"], "Adam") + self.assertEqual(search_space_instance["learning_rate"]["_index"], 2) + self.assertEqual(search_space_instance["learning_rate"]["_value"], 0.002) + + def test_json2vals(self): + """test for json2vals + """ + json_search_space = { + "optimizer": { + "_type": "choice", + "_value": ["Adam", "SGD"] + }, + "learning_rate": { + "_type": "choice", + "_value": [0.0001, 0.001, 0.002, 0.005, 0.01] + } + } + out_y = dict() + vals = { + 'optimizer': { + '_index': 0, + 
'_value': 'Adam' + }, + 'learning_rate': { + '_index': 1, + '_value': 0.001 + } + } + json2vals(json_search_space, vals, out_y) + self.assertEqual(out_y["root[optimizer]-choice"], 0) + self.assertEqual(out_y["root[learning_rate]-choice"], 1) + + +if __name__ == '__main__': + main() diff --git a/src/sdk/pynni/nni/metis_tuner/metis_tuner.py b/src/sdk/pynni/nni/metis_tuner/metis_tuner.py index 7355a750e8..5701232ede 100644 --- a/src/sdk/pynni/nni/metis_tuner/metis_tuner.py +++ b/src/sdk/pynni/nni/metis_tuner/metis_tuner.py @@ -38,17 +38,10 @@ import nni.metis_tuner.Regression_GP.Prediction as gp_prediction import nni.metis_tuner.Regression_GP.Selection as gp_selection from nni.tuner import Tuner -from nni.utils import extract_scalar_reward +from nni.utils import OptimizeMode, extract_scalar_reward logger = logging.getLogger("Metis_Tuner_AutoML") -@unique -class OptimizeMode(Enum): - """ - Optimize Mode class - """ - Minimize = 'minimize' - Maximize = 'maximize' NONE_TYPE = '' diff --git a/src/sdk/pynni/nni/networkmorphism_tuner/networkmorphism_tuner.py b/src/sdk/pynni/nni/networkmorphism_tuner/networkmorphism_tuner.py index 112576c41d..00d27b29f2 100644 --- a/src/sdk/pynni/nni/networkmorphism_tuner/networkmorphism_tuner.py +++ b/src/sdk/pynni/nni/networkmorphism_tuner/networkmorphism_tuner.py @@ -23,10 +23,10 @@ from nni.tuner import Tuner -from nni.utils import extract_scalar_reward +from nni.utils import OptimizeMode, extract_scalar_reward from nni.networkmorphism_tuner.bayesian import BayesianOptimizer from nni.networkmorphism_tuner.nn import CnnGenerator, MlpGenerator -from nni.networkmorphism_tuner.utils import Constant, OptimizeMode +from nni.networkmorphism_tuner.utils import Constant from nni.networkmorphism_tuner.graph import graph_to_json, json_to_graph diff --git a/src/sdk/pynni/nni/networkmorphism_tuner/utils.py b/src/sdk/pynni/nni/networkmorphism_tuner/utils.py index 6e4970e5b0..6ba95ea58b 100644 --- a/src/sdk/pynni/nni/networkmorphism_tuner/utils.py +++ 
b/src/sdk/pynni/nni/networkmorphism_tuner/utils.py @@ -18,16 +18,6 @@ # OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # ================================================================================================== -from enum import Enum, unique - -@unique -class OptimizeMode(Enum): - """ - Oprimize Mode class - """ - - Minimize = "minimize" - Maximize = "maximize" class Constant: '''Constant for the Tuner. diff --git a/src/sdk/pynni/nni/smac_tuner/smac_tuner.py b/src/sdk/pynni/nni/smac_tuner/smac_tuner.py index c9da7bb331..d6217367ff 100644 --- a/src/sdk/pynni/nni/smac_tuner/smac_tuner.py +++ b/src/sdk/pynni/nni/smac_tuner/smac_tuner.py @@ -22,7 +22,7 @@ """ from nni.tuner import Tuner -from nni.utils import extract_scalar_reward +from nni.utils import OptimizeMode, extract_scalar_reward import sys import logging @@ -37,11 +37,6 @@ from smac.facade.roar_facade import ROAR from smac.facade.epils_facade import EPILS -@unique -class OptimizeMode(Enum): - """Oprimize Mode class""" - Minimize = 'minimize' - Maximize = 'maximize' class SMACTuner(Tuner): """ diff --git a/src/sdk/pynni/nni/utils.py b/src/sdk/pynni/nni/utils.py index f3342f5395..4df75a58f1 100644 --- a/src/sdk/pynni/nni/utils.py +++ b/src/sdk/pynni/nni/utils.py @@ -17,11 +17,54 @@ # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT # OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # ================================================================================================== +""" +utils.py +""" import os +from enum import Enum, unique + from .common import init_logger from .env_vars import dispatcher_env_vars +@unique +class OptimizeMode(Enum): + """Optimize Mode class + + if OptimizeMode is 'minimize', it means the tuner needs to minimize the reward + that is received from the Trial.
+ + if OptimizeMode is 'maximize', it means the tuner needs to maximize the reward + that is received from the Trial. + """ + Minimize = 'minimize' + Maximize = 'maximize' + +class NodeType: + """Node Type class + """ + ROOT = 'root' + TYPE = '_type' + VALUE = '_value' + INDEX = '_index' + NAME = '_name' + + +def split_index(params): + """ + Delete index information from params + """ + if isinstance(params, dict): + if NodeType.INDEX in params.keys(): + return split_index(params[NodeType.VALUE]) + result = {} + for key in params: + result[key] = split_index(params[key]) + return result + else: + return params + + + def extract_scalar_reward(value, scalar_key='default'): """ Extract scalar reward from trial result. diff --git a/src/sdk/pynni/tests/test_utils.py b/src/sdk/pynni/tests/test_utils.py new file mode 100644 index 0000000000..fbc8976b7a --- /dev/null +++ b/src/sdk/pynni/tests/test_utils.py @@ -0,0 +1,105 @@ +# Copyright (c) Microsoft Corporation. All rights reserved. +# +# MIT License +# +# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and +# associated documentation files (the "Software"), to deal in the Software without restriction, +# including without limitation the rights to use, copy, modify, merge, publish, distribute, +# sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all copies or +# substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT +# NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +# NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +# DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT +# OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +# ================================================================================================== + +from unittest import TestCase, main + +import nni +from nni.utils import split_index + + +class UtilsTestCase(TestCase): + def test_split_index_normal(self): + """test for normal search space + """ + normal__params_with_index = { + "dropout_rate": { + "_index" : 1, + "_value" : 0.9 + }, + "hidden_size": { + "_index" : 1, + "_value" : 512 + } + } + normal__params= { + "dropout_rate": 0.9, + "hidden_size": 512 + } + + params = split_index(normal__params_with_index) + self.assertEqual(params, normal__params) + + def test_split_index_nested(self): + """test for nested search space + """ + nested_params_with_index = { + "layer0": { + "_name": "Avg_pool", + "pooling_size":{ + "_index" : 1, + "_value" : 2 + } + }, + "layer1": { + "_name": "Empty" + }, + "layer2": { + "_name": "Max_pool", + "pooling_size": { + "_index" : 2, + "_value" : 3 + } + }, + "layer3": { + "_name": "Conv", + "kernel_size": { + "_index" : 3, + "_value" : 5 + }, + "output_filters": { + "_index" : 3, + "_value" : 64 + } + } + } + nested_params = { + "layer0": { + "_name": "Avg_pool", + "pooling_size": 2 + }, + "layer1": { + "_name": "Empty" + }, + "layer2": { + "_name": "Max_pool", + "pooling_size": 3 + }, + "layer3": { + "_name": "Conv", + "kernel_size": 5, + "output_filters": 64 + } + } + params = split_index(nested_params_with_index) + self.assertEqual(params, nested_params) + + +if __name__ == '__main__': + main() \ No newline at end of file diff --git a/test/config_test/examples/mnist-cascading-search-space.test.yml b/test/config_test/examples/mnist-cascading-search-space.test.yml new file mode 100644 index 0000000000..1904afac64 --- 
/dev/null +++ b/test/config_test/examples/mnist-cascading-search-space.test.yml @@ -0,0 +1,24 @@ +authorName: nni +experimentName: default_test +maxExecDuration: 5m +maxTrialNum: 4 +trialConcurrency: 2 +searchSpacePath: ../../../examples/trials/mnist-cascading-search-space/search_space.json + +tuner: + #choice: TPE, Random, Anneal, Evolution + builtinTunerName: TPE +assessor: + builtinAssessorName: Medianstop + classArgs: + optimize_mode: maximize +trial: + codeDir: ../../../examples/trials/mnist-cascading-search-space + command: python3 mnist.py --batch_num 100 + gpuNum: 0 + +useAnnotation: false +multiPhase: false +multiThread: false + +trainingServicePlatform: local diff --git a/tools/README.md b/tools/README.md index 78983fc9ea..e215e893ef 100644 --- a/tools/README.md +++ b/tools/README.md @@ -54,4 +54,4 @@ python >= 3.5 please reference to the [NNI CTL document]. -[NNI CTL document]: ../docs/en_US/NNICTLDOC.md +[NNI CTL document]: ../docs/en_US/Nnictl.md diff --git a/tools/nni_annotation/search_space_generator.py b/tools/nni_annotation/search_space_generator.py index 833d989c1d..ed200ce934 100644 --- a/tools/nni_annotation/search_space_generator.py +++ b/tools/nni_annotation/search_space_generator.py @@ -21,6 +21,7 @@ import ast import astor +import numbers # pylint: disable=unidiomatic-typecheck @@ -87,8 +88,9 @@ def visit_Call(self, node): # pylint: disable=invalid-name args = [key.n if type(key) is ast.Num else key.s for key in node.args[0].keys] else: # arguments of other functions must be literal number - assert all(type(arg) is ast.Num for arg in node.args), 'Smart parameter\'s arguments must be number literals' - args = [arg.n for arg in node.args] + assert all(isinstance(ast.literal_eval(astor.to_source(arg)), numbers.Real) for arg in node.args), \ + 'Smart parameter\'s arguments must be number literals' + args = [ast.literal_eval(astor.to_source(arg)) for arg in node.args] key = self.module_name + '/' + name + '/' + func # store key in ast.Call
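The `_split_index` helper promoted to `split_index` in `nni/utils.py` above can be illustrated standalone. This is a sketch mirroring the new `utils.py` version, with module-level `INDEX`/`VALUE` constants standing in for the `NodeType` class:

```python
# Standalone sketch of the split_index helper this diff moves into
# nni/utils.py (previously the private _split_index in hyperopt_tuner.py).
# INDEX/VALUE stand in for the new NodeType constants.
INDEX = '_index'
VALUE = '_value'

def split_index(params):
    """Recursively strip '_index' bookkeeping, keeping only sampled values."""
    if isinstance(params, dict):
        if INDEX in params:
            return split_index(params[VALUE])
        return {key: split_index(value) for key, value in params.items()}
    return params

# A 'choice' parameter records both the chosen index and the chosen value;
# only the value needs to reach the trial.
print(split_index({"dropout_rate": {"_index": 1, "_value": 0.9},
                   "hidden_size": {"_index": 1, "_value": 512}}))
# -> {'dropout_rate': 0.9, 'hidden_size': 512}
```

This matches the behavior exercised by the new `test_split_index_normal` and `test_split_index_nested` cases: `_index` entries collapse to their `_value`, while keys such as `_name` pass through unchanged.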
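The rename at the heart of this diff, `json2paramater` to `json2parameter`, touches the random sampler used by the hyperband advisor. Its shape can be sketched as a recursive walk over the search-space JSON. This simplified illustration handles only `choice` and `uniform`, and substitutes stdlib `random.Random` for the numpy `RandomState` the real code uses; it is not the shipped implementation:

```python
import copy
import random

# Sketch of json2parameter (formerly json2paramater): recursively walk a
# search-space JSON and draw one concrete value per hyperparameter.
TYPE = '_type'
VALUE = '_value'

def json2parameter(ss_spec, rng):
    if isinstance(ss_spec, dict):
        if TYPE in ss_spec:
            _type, _value = ss_spec[TYPE], ss_spec[VALUE]
            if _type == 'choice':
                # recurse so nested search spaces inside a choice are sampled too
                return json2parameter(_value[rng.randrange(len(_value))], rng)
            if _type == 'uniform':
                low, high = _value
                return rng.uniform(low, high)
            raise ValueError('unsupported _type: %s' % _type)
        return {key: json2parameter(spec, rng) for key, spec in ss_spec.items()}
    if isinstance(ss_spec, list):
        return [json2parameter(spec, rng) for spec in ss_spec]
    # plain literals are returned as-is
    return copy.deepcopy(ss_spec)

space = {
    "optimizer": {"_type": "choice", "_value": ["Adam", "SGD"]},
    "learning_rate": {"_type": "uniform", "_value": [0.0001, 0.01]},
}
params = json2parameter(space, random.Random(0))
assert params["optimizer"] in ("Adam", "SGD")
assert 0.0001 <= params["learning_rate"] <= 0.01
```

Note the contrast with the hyperopt-tuner variant of the same name: that one wraps each sampled `choice` in an `{"_index": ..., "_value": ...}` dict (which is why `split_index` exists), whereas the hyperband-style sampler above returns bare values.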