Merge pull request #255 from microsoft/master
merge master
SparkSnail authored Jun 28, 2020
2 parents caeffb8 + 885a258 commit 57c300e
Showing 119 changed files with 4,012 additions and 1,096 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -192,6 +192,7 @@ Within the following table, we summarized the current NNI capabilities, we are g
<ul>
<li><a href="docs/en_US/Tuner/CustomizeTuner.md">CustomizeTuner</a></li>
<li><a href="docs/en_US/Assessor/CustomizeAssessor.md">CustomizeAssessor</a></li>
<li><a href="docs/en_US/Tutorial/InstallCustomizedAlgos.md">Install Customized Algorithms as Builtin Tuners/Assessors/Advisors</a></li>
</ul>
</td>
<td style="border-top:#FF0000 solid 0px;">
50 changes: 29 additions & 21 deletions azure-pipelines.yml
@@ -8,8 +8,8 @@ jobs:

steps:
- script: |
set -e
python3 -m pip install --upgrade pip setuptools --user
python3 -m pip install pylint==2.3.1 astroid==2.2.5 --user
python3 -m pip install coverage --user
echo "##vso[task.setvariable variable=PATH]${HOME}/.local/bin:${PATH}"
displayName: 'Install python tools'
@@ -25,29 +25,16 @@ jobs:
yarn eslint
displayName: 'Run eslint'
- script: |
set -e
python3 -m pip install torch==1.5.0+cpu torchvision==0.6.0+cpu -f https://download.pytorch.org/whl/torch_stable.html --user
python3 -m pip install tensorflow==1.15.2 --user
python3 -m pip install keras==2.1.6 --user
python3 -m pip install tensorflow==2.2.0 --user
python3 -m pip install keras==2.4.2 --user
python3 -m pip install gym onnx --user
python3 -m pip install sphinx==1.8.3 sphinx-argparse==0.2.5 sphinx-markdown-tables==0.0.9 sphinx-rtd-theme==0.4.2 sphinxcontrib-websupport==1.1.0 recommonmark==0.5.0 --user
sudo apt-get install swig -y
nnictl package install --name=SMAC
nnictl package install --name=BOHB
displayName: 'Install dependencies'
- script: |
set -e
python3 -m pylint --rcfile pylintrc nni_annotation
python3 -m pylint --rcfile pylintrc nni_cmd
python3 -m pylint --rcfile pylintrc nni_gpu_tool
python3 -m pylint --rcfile pylintrc nni_trial_tool
python3 -m pylint --rcfile pylintrc nni
python3 -m pylint --rcfile pylintrc nnicli
displayName: 'Run pylint'
- script: |
python3 -m pip install flake8 --user
EXCLUDES=./src/nni_manager/,./src/webui,./tools/nni_annotation/testcase/,./examples/trials/mnist-nas/*/mnist*.py,./examples/trials/nas_cifar10/src/cifar10/general_child.py
python3 -m flake8 . --count --exclude=$EXCLUDES --select=E9,F63,F72,F82 --show-source --statistics
displayName: 'Run flake8 tests to find Python syntax errors and undefined names'
- script: |
cd test
source scripts/unittest.sh
@@ -61,20 +48,23 @@ jobs:
sphinx-build -M html . _build -W
displayName: 'Sphinx Documentation Build check'
- job: 'ubuntu_1604_python35_legacy_torch'
- job: 'ubuntu_1604_python35_legacy_torch_tf'
pool:
vmImage: 'Ubuntu 16.04'

steps:
- script: |
set -e
python3 -m pip install --upgrade pip setuptools --user
python3 -m pip install pylint==2.3.1 astroid==2.2.5 --user
python3 -m pip install coverage --user
echo "##vso[task.setvariable variable=PATH]${HOME}/.local/bin:${PATH}"
displayName: 'Install python tools'
- script: |
source install.sh
displayName: 'Install nni toolkit via source code'
- script: |
set -e
python3 -m pip install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html --user
python3 -m pip install tensorflow==1.15.2 --user
python3 -m pip install keras==2.1.6 --user
@@ -83,6 +73,21 @@ jobs:
nnictl package install --name=SMAC
nnictl package install --name=BOHB
displayName: 'Install dependencies'
- script: |
set -e
python3 -m pylint --rcfile pylintrc nni_annotation
python3 -m pylint --rcfile pylintrc nni_cmd
python3 -m pylint --rcfile pylintrc nni_gpu_tool
python3 -m pylint --rcfile pylintrc nni_trial_tool
python3 -m pylint --rcfile pylintrc nni
python3 -m pylint --rcfile pylintrc nnicli
displayName: 'Run pylint'
- script: |
set -e
python3 -m pip install flake8 --user
EXCLUDES=./src/nni_manager/,./src/webui,./tools/nni_annotation/testcase/,./examples/trials/mnist-nas/*/mnist*.py,./examples/trials/nas_cifar10/src/cifar10/general_child.py
python3 -m flake8 . --count --exclude=$EXCLUDES --select=E9,F63,F72,F82 --show-source --statistics
displayName: 'Run flake8 tests to find Python syntax errors and undefined names'
- script: |
cd test
source scripts/unittest.sh
@@ -98,18 +103,21 @@ jobs:
vmImage: 'macOS-10.15'

steps:
- script: python3 -m pip install --upgrade pip setuptools
- script: |
python3 -m pip install --upgrade pip setuptools
echo "##vso[task.setvariable variable=PATH]${HOME}/Library/Python/3.7/bin:${PATH}"
displayName: 'Install python tools'
- script: |
echo "network-timeout 600000" >> ${HOME}/.yarnrc
source install.sh
echo "##vso[task.setvariable variable=PATH]${HOME}/Library/Python/3.7/bin:${PATH}"
displayName: 'Install nni toolkit via source code'
- script: |
set -e
# pytorch Mac binary does not support CUDA, default is cpu version
python3 -m pip install torchvision==0.6.0 torch==1.5.0 --user
python3 -m pip install tensorflow==1.15.2 --user
brew install swig@3
rm /usr/local/bin/swig
rm -f /usr/local/bin/swig
ln -s /usr/local/opt/swig\@3/bin/swig /usr/local/bin/swig
nnictl package install --name=SMAC
displayName: 'Install dependencies'
4 changes: 3 additions & 1 deletion deployment/pypi/setup.py
@@ -58,11 +58,13 @@
'PythonWebHDFS',
'hyperopt==0.1.2',
'json_tricks',
'netifaces',
'numpy',
'scipy',
'coverage',
'colorama',
'scikit-learn>=0.20,<0.22'
'scikit-learn>=0.20,<0.22',
'pkginfo'
],
classifiers = [
'Programming Language :: Python :: 3',
12 changes: 11 additions & 1 deletion docs/en_US/Compressor/CompressionReference.md
@@ -18,6 +18,16 @@
.. autoclass:: nni.compression.torch.utils.shape_dependency.ChannelDependency
:members:
.. autoclass:: nni.compression.torch.utils.mask_conflict.MaskConflict
.. autoclass:: nni.compression.torch.utils.shape_dependency.GroupDependency
:members:
.. autoclass:: nni.compression.torch.utils.mask_conflict.CatMaskPadding
:members:
.. autoclass:: nni.compression.torch.utils.mask_conflict.GroupMaskConflict
:members:
.. autoclass:: nni.compression.torch.utils.mask_conflict.ChannelMaskConflict
:members:
```
6 changes: 2 additions & 4 deletions docs/en_US/Compressor/CompressionUtils.md
@@ -116,8 +116,6 @@ Set 12,layer4.1.conv1
When the masks of different layers in a model conflict (for example, when different sparsities are assigned to layers that have a channel dependency), we can fix the mask conflict with MaskConflict. Specifically, MaskConflict loads the masks exported by the pruners (L1FilterPruner, etc.), checks whether there is a mask conflict, and if so sets the conflicting masks to the same value.

```
from nni.compression.torch.utils.mask_conflict import MaskConflict
mc = MaskConflict('./resnet18_mask', net, data)
mc.fix_mask_conflict()
mc.export('./resnet18_fixed_mask')
from nni.compression.torch.utils.mask_conflict import fix_mask_conflict
fixed_mask = fix_mask_conflict('./resnet18_mask', net, data)
```
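
For context, a minimal end-to-end sketch of how this utility might be used, assuming NNI 1.x-style pruner APIs; the model, sparsity config, and file paths here are illustrative assumptions, not part of this change:

```python
import torch
import torchvision.models as models
from nni.compression.torch import L1FilterPruner
from nni.compression.torch.utils.mask_conflict import fix_mask_conflict

net = models.resnet18()

# illustrative config: prune all convolution layers to 50% sparsity,
# which can produce conflicting masks on channel-dependent layers
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
pruner = L1FilterPruner(net, config_list)
pruner.compress()
pruner.export_model(model_path='./resnet18.pth', mask_path='./resnet18_mask')

# the dummy input is only used to trace the graph and detect channel dependencies
dummy_input = torch.randn(1, 3, 224, 224)
fixed_mask = fix_mask_conflict('./resnet18_mask', net, dummy_input)
```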
1 change: 1 addition & 0 deletions docs/en_US/Compressor/Overview.md
@@ -26,6 +26,7 @@ Pruning algorithms compress the original network by removing redundant weights o
| [ActivationAPoZRankFilterPruner](./Pruner.md#ActivationAPoZRankFilterPruner) | Pruning filters based on the metric APoZ (average percentage of zeros) which measures the percentage of zeros in activations of (convolutional) layers. [Reference Paper](https://arxiv.org/abs/1607.03250) |
| [ActivationMeanRankFilterPruner](./Pruner.md#ActivationMeanRankFilterPruner) | Pruning filters based on the metric that calculates the smallest mean value of output activations |
| [Slim Pruner](./Pruner.md#slim-pruner) | Pruning channels in convolution layers by pruning scaling factors in BN layers (Learning Efficient Convolutional Networks through Network Slimming) [Reference Paper](https://arxiv.org/abs/1708.06519) |
| [TaylorFO Pruner](./Pruner.md#taylorfoweightfilterpruner) | Pruning filters based on the first-order Taylor expansion on weights (Importance Estimation for Neural Network Pruning) [Reference Paper](http://jankautz.com/publications/Importance4NNPruning_CVPR19.pdf) |


**Quantization**
33 changes: 33 additions & 0 deletions docs/en_US/NAS/ClassicNas.md
@@ -0,0 +1,33 @@
# Classic NAS Algorithms

In classic NAS algorithms, each architecture is trained as a trial and the NAS algorithm acts as a tuner. Thus, this training mode naturally fits within the NNI hyper-parameter tuning framework, where the tuner generates a new architecture for the next trial and the trials run in the training service.

## Quick Start

The following example shows how to use classic NAS algorithms. You can see it is quite similar to NNI hyper-parameter tuning.

```python
model = Net()

# get the chosen architecture from tuner and apply it on model
get_and_apply_next_architecture(model)
train(model) # your code for training the model
acc = test(model) # test the trained model
nni.report_final_result(acc) # report the performance of the chosen architecture
```

First, instantiate the model. The search space has been defined in this model through `LayerChoice` and `InputChoice`. After that, users should invoke `get_and_apply_next_architecture(model)` to settle on a specific architecture. This function receives the architecture from the tuner (i.e., the classic NAS algorithm) and applies it to `model`. At this point, `model` becomes a specific architecture rather than a search space. Then users are free to train this model just like a normal PyTorch model. After getting the accuracy of this model, users should invoke `nni.report_final_result(acc)` to report the result to the tuner.
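
For reference, a minimal sketch of such a model, assuming the NNI 1.x import paths used by the classic NAS examples; the candidate ops and layer sizes are illustrative:

```python
import torch.nn as nn
import torch.nn.functional as F
from nni.nas.pytorch.mutables import LayerChoice, InputChoice
from nni.nas.pytorch.classic_nas import get_and_apply_next_architecture

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # the tuner picks one of these candidate ops per trial
        self.conv = LayerChoice([
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
        ])
        # the tuner picks one of the candidate inputs per trial
        self.skip = InputChoice(n_candidates=2, n_chosen=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, 10)

    def forward(self, x):
        out = F.relu(self.conv(x))
        out = self.skip([out, out * 0.5])  # two illustrative candidate inputs
        return self.fc(self.pool(out).flatten(1))
```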

At this point, the trial code is ready. Then, we can prepare an NNI experiment, i.e., a search space file and an experiment config file. Unlike NNI hyper-parameter tuning, the search space file is automatically generated from the trial code by running the following command (the detailed usage of this command can be found [here](../Tutorial/Nnictl.md)):

`nnictl ss_gen --trial_command="the command for running your trial code"`

A file named `nni_auto_gen_search_space.json` is generated by this command. Then put the path of the generated search space in the `searchSpacePath` field of the experiment config file. The other fields of the config file can be filled in by referring to [this tutorial](../Tutorial/QuickStart.md).
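
For illustration, the relevant part of an experiment config might look like the following sketch in the NNI v1 YAML format; the trial command, tuner choice, and budget values are placeholder assumptions:

```yaml
authorName: default
experimentName: classic_nas_example
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 20
trainingServicePlatform: local
searchSpacePath: nni_auto_gen_search_space.json   # generated by nnictl ss_gen
useAnnotation: false
tuner:
  builtinTunerName: PPOTuner
  classArgs:
    optimize_mode: maximize
trial:
  command: python3 mnist.py   # the same command passed to nnictl ss_gen
  codeDir: .
  gpuNum: 0
```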

Currently, we only support [PPO Tuner](../Tuner/BuiltinTuner.md) and [random tuner](https://github.com/microsoft/nni/tree/master/examples/tuners/random_nas_tuner) for classic NAS. More classic NAS algorithms will be supported soon.

The complete examples can be found [here](https://github.com/microsoft/nni/tree/master/examples/nas/classic_nas) for PyTorch and [here](https://github.com/microsoft/nni/tree/master/examples/nas/classic_nas-tf) for TensorFlow.

## Standalone mode for easy debugging

We support a standalone mode for easy debugging, where you can directly run the trial command without launching an NNI experiment. This is for checking whether your trial code can run correctly. In this standalone mode, the first candidate(s) are chosen for `LayerChoice` and `InputChoice`.