This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Merge pull request #227 from microsoft/master
Add test for documentation build (#1924)
SparkSnail authored Jan 10, 2020
2 parents 9bafa4c + 3e39c96 commit c23b807
Showing 10 changed files with 19 additions and 12 deletions.
5 changes: 5 additions & 0 deletions azure-pipelines.yml
@@ -31,6 +31,7 @@ jobs:
python3 -m pip install tensorflow==1.13.1 --user
python3 -m pip install keras==2.1.6 --user
python3 -m pip install gym onnx --user
+ python3 -m pip install sphinx==1.8.3 sphinx-argparse==0.2.5 sphinx-markdown-tables==0.0.9 sphinx-rtd-theme==0.4.2 sphinxcontrib-websupport==1.1.0 recommonmark==0.5.0 --user
sudo apt-get install swig -y
nnictl package install --name=SMAC
nnictl package install --name=BOHB
@@ -69,6 +70,10 @@ jobs:
cd test
python3 cli_test.py
displayName: 'nnicli test'
+ - script: |
+     cd docs/en_US/
+     sphinx-build -M html . _build -W
+   displayName: 'Sphinx Documentation Build check'
- job: 'basic_test_pr_macOS'
pool:
2 changes: 0 additions & 2 deletions docs/en_US/Compressor/Pruner.md
@@ -342,5 +342,3 @@ You can view example for more information

- **sparsity:** How much percentage of convolutional filters are to be pruned.
- **op_types:** Only Conv2d is supported in ActivationMeanRankFilterPruner
-
- ***
6 changes: 2 additions & 4 deletions docs/en_US/Compressor/Quantizer.md
@@ -5,11 +5,9 @@ Quantizer on NNI Compressor
We provide Naive Quantizer to quantize weights to 8 bits by default; you can use it to test a quantization algorithm without any configuration.

### Usage
- tensorflow
- ```python nni.compression.tensorflow.NaiveQuantizer(model_graph).compress()
- ```
pytorch
- ```python nni.compression.torch.NaiveQuantizer(model).compress()
+ ```python
+ model = nni.compression.torch.NaiveQuantizer(model).compress()
```

***
4 changes: 2 additions & 2 deletions docs/en_US/Release.md
@@ -186,7 +186,7 @@
* Run trial jobs on the GPU running non-NNI jobs
* Kubeflow v1beta2 operator
* Support Kubeflow TFJob/PyTorchJob v1beta2
- * [General NAS programming interface](AdvancedFeature/GeneralNasInterfaces.md)
+ * [General NAS programming interface](https://github.com/microsoft/nni/blob/v0.8/docs/en_US/GeneralNasInterfaces.md)
* Provide NAS programming interface for users to easily express their neural architecture search space through NNI annotation
* Provide a new command `nnictl trial codegen` for debugging the NAS code
* Tutorial of NAS programming interface, example of NAS on MNIST, customized random tuner for NAS
@@ -299,7 +299,7 @@
* Support [Metis tuner](Tuner/MetisTuner.md) as a new NNI tuner. The Metis algorithm has been proven to perform well for **online** hyper-parameter tuning.
* Support [ENAS customized tuner](https://github.com/countif/enas_nni), contributed by a GitHub community user. It is an algorithm for neural architecture search that learns a network architecture via reinforcement learning and achieves better performance than NAS.
* Support [Curve fitting assessor](Assessor/CurvefittingAssessor.md) for early stop policy using learning curve extrapolation.
- * Advanced Support of [Weight Sharing](AdvancedFeature/AdvancedNas.md): Enable weight sharing for NAS tuners, currently through NFS.
+ * Advanced Support of [Weight Sharing](https://github.com/microsoft/nni/blob/v0.5/docs/AdvancedNAS.md): Enable weight sharing for NAS tuners, currently through NFS.

#### Training Service Enhancement

2 changes: 1 addition & 1 deletion docs/en_US/TrainingService/PaiYarnMode.md
@@ -106,7 +106,7 @@ nnictl create --config exp_paiYarn.yml
```
to start the experiment in paiYarn mode. NNI will create an OpenpaiYarn job for each trial, and the job name format is something like `nni_exp_{experiment_id}_trial_{trial_id}`.
You can see jobs created by NNI in the OpenpaiYarn cluster's web portal, like:
- ![](../../img/nni_paiYarn_joblist.jpg)
+ ![](../../img/nni_pai_joblist.jpg)
Notice: in paiYarn mode, NNIManager starts a REST server that listens on a port equal to your NNI WebUI's port plus 1. For example, if your WebUI port is `8080`, the REST server listens on `8081` to receive metrics from trial jobs running in Kubernetes. So you should open TCP port `8081` in your firewall rule to allow incoming traffic.
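The port relationship described above is simply "WebUI port + 1". A trivial helper illustrating it (hypothetical, not part of NNI's API):

```python
def nni_rest_port(webui_port: int) -> int:
    """Return the NNIManager REST server port for a given WebUI port.

    Per the paiYarn-mode docs, the REST server always listens on the
    WebUI port plus one, so that port must be opened in the firewall.
    """
    return webui_port + 1

print(nni_rest_port(8080))  # 8081
```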
2 changes: 1 addition & 1 deletion docs/en_US/conf.py
@@ -72,7 +72,7 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
- exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+ exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'Release_v1.0.md']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = None
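`exclude_patterns` entries are glob-style patterns matched against paths relative to the source directory, so adding `Release_v1.0.md` keeps that file out of the build. A rough sketch of the matching behavior (illustration only; Sphinx uses its own matching internally, and `is_excluded` is a hypothetical helper):

```python
from fnmatch import fnmatch

# The patterns from the updated docs/en_US/conf.py
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'Release_v1.0.md']

def is_excluded(relpath: str) -> bool:
    """Approximate Sphinx's exclude check: a file is skipped if it matches
    a pattern directly or lives under an excluded directory."""
    return any(fnmatch(relpath, pat) or relpath.startswith(pat + '/')
               for pat in exclude_patterns)

print(is_excluded('Release_v1.0.md'))        # excluded
print(is_excluded('Release.md'))             # still built
print(is_excluded('_build/html/index.html')) # under an excluded directory
```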
1 change: 1 addition & 0 deletions docs/en_US/examples.rst
@@ -11,3 +11,4 @@ Examples
EvolutionSQuAD<./TrialExample/SquadEvolutionExamples>
GBDT<./TrialExample/GbdtExample>
RocksDB <./TrialExample/RocksdbExamples>
+ KDExample <./TrialExample/KDExample>
2 changes: 1 addition & 1 deletion docs/en_US/model_compression.rst
@@ -18,7 +18,7 @@ For details, please refer to the following tutorials:
Overview <Compressor/Overview>
Level Pruner <Compressor/Pruner>
AGP Pruner <Compressor/Pruner>
- L1Filter Pruner <Compressor/L1FilterPruner>
+ L1Filter Pruner <Compressor/l1filterpruner>
Slim Pruner <Compressor/SlimPruner>
Lottery Ticket Pruner <Compressor/LotteryTicketHypothesis>
FPGM Pruner <Compressor/Pruner>
1 change: 1 addition & 0 deletions docs/en_US/training_services.rst
@@ -6,5 +6,6 @@ Introduction to NNI Training Services
Local<./TrainingService/LocalMode>
Remote<./TrainingService/RemoteMachineMode>
OpenPAI<./TrainingService/PaiMode>
+ OpenPAI Yarn Mode<./TrainingService/PaiYarnMode>
Kubeflow<./TrainingService/KubeflowMode>
FrameworkController<./TrainingService/FrameworkControllerMode>
6 changes: 5 additions & 1 deletion src/sdk/pynni/nni/ppo_tuner/ppo_tuner.py
Expand Up @@ -503,15 +503,19 @@ def generate_parameters(self, parameter_id, **kwargs):
"""
Generate parameters; if no trial configuration is available for now, self.credit is increased by 1 so the config can be sent later
Parameters
----------
parameter_id : int
- Unique identifier for requested hyper-parameters. This will later be used in :meth:`receive_trial_result`.
+ Unique identifier for requested hyper-parameters.
+ This will later be used in :meth:`receive_trial_result`.
**kwargs
Not used
Returns
-------
dict
One newly generated configuration
"""
if self.first_inf:
self.trials_result = [None for _ in range(self.inf_batch_size)]
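The docstring above describes a "credit" pattern: when no configuration is ready, the tuner records an IOU and delivers the config later. A minimal self-contained sketch of that pattern (hypothetical class, not NNI's actual PPO tuner implementation):

```python
class CreditingTuner:
    """Toy illustration of the credit mechanism from the docstring."""

    def __init__(self):
        self.credit = 0   # configurations owed to the training service
        self.pending = [] # configurations ready to hand out

    def generate_parameters(self, parameter_id):
        # No config ready: remember that one is owed, send it later.
        if not self.pending:
            self.credit += 1
            return None
        return self.pending.pop(0)

    def add_configs(self, configs):
        # New configs arrive; pay back owed requests first.
        self.pending.extend(configs)
        sent = []
        while self.credit and self.pending:
            self.credit -= 1
            sent.append(self.pending.pop(0))
        return sent
```

Usage: an early `generate_parameters` call returns `None` and bumps `credit`; once configs arrive via `add_configs`, the owed one is dispatched first and later calls are served directly.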
