
Commit

Merge pull request #42 from microsoft/master
pull code
chicm-ms authored Nov 4, 2019
2 parents 4a3ba83 + eea5078 commit c8a1148
Showing 53 changed files with 1,022 additions and 553 deletions.
31 changes: 27 additions & 4 deletions azure-pipelines.yml
@@ -8,23 +8,42 @@ jobs:
PYTHON_VERSION: '3.6'

steps:
- script: python3 -m pip install --upgrade pip setuptools --user
- script: |
python3 -m pip install --upgrade pip setuptools --user
python3 -m pip install pylint==2.3.1 astroid==2.2.5 --user
python3 -m pip install coverage --user
displayName: 'Install python tools'
- script: |
source install.sh
displayName: 'Install nni toolkit via source code'
- script: |
python3 -m pip install torch==0.4.1 --user
python3 -m pip install torchvision==0.2.1 --user
python3 -m pip install tensorflow==1.13.1 --user
python3 -m pip install keras==2.1.6 --user
python3 -m pip install gym onnx --user
sudo apt-get install swig -y
PATH=$HOME/.local/bin:$PATH nnictl package install --name=SMAC
PATH=$HOME/.local/bin:$PATH nnictl package install --name=BOHB
displayName: 'Install dependencies'
- script: |
source install.sh
displayName: 'Install nni toolkit via source code'
set -e
python3 -m pylint --rcfile pylintrc nni_annotation
python3 -m pylint --rcfile pylintrc nni_cmd
python3 -m pylint --rcfile pylintrc nni_gpu_tool
python3 -m pylint --rcfile pylintrc nni_trial_tool
python3 -m pylint --rcfile pylintrc nni
python3 -m pylint --rcfile pylintrc nnicli
displayName: 'Run pylint'
- script: |
python3 -m pip install flake8 --user
IGNORE=./tools/nni_annotation/testcase/*:F821,./examples/trials/mnist-nas/*/mnist*.py:F821,./examples/trials/nas_cifar10/src/cifar10/general_child.py:F821
python3 -m flake8 . --count --per-file-ignores=$IGNORE --select=E9,F63,F72,F82 --show-source --statistics
displayName: 'Run flake8 tests to find Python syntax errors and undefined names'
- script: |
cd test
sudo apt install -y swig
PATH=$HOME/.local/bin:$PATH nnictl package install --name=SMAC
source unittest.sh
displayName: 'Unit test'
- script: |
@@ -65,7 +84,11 @@ jobs:
displayName: 'Install nni toolkit via source code'
- script: |
cd test
PATH=$HOME/Library/Python/3.7/bin:$PATH && source unittest.sh
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
brew install swig@3
ln -s /usr/local/opt/swig\@3/bin/swig /usr/local/bin/swig
PATH=$HOME/Library/Python/3.7/bin:$PATH nnictl package install --name=SMAC
PATH=$HOME/Library/Python/3.7/bin:$PATH source unittest.sh
displayName: 'Unit test'
- script: |
cd test
10 changes: 5 additions & 5 deletions docs/en_US/Compressor/AutoCompression.md
@@ -9,13 +9,13 @@ You can easily compress a model with NNI compression. Take pruning for example,
```python
from nni.compression.torch import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
pruner = LevelPruner(config_list)
pruner(model)
pruner = LevelPruner(model, config_list)
pruner.compress()
```

The 'default' op_type stands for the module types defined in [default_layers.py](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/compression/torch/default_layers.py) for PyTorch.

Therefore ```{ 'sparsity': 0.8, 'op_types': ['default'] }``` means that **all layers with the specified op_types will be compressed with the same 0.8 sparsity**. When ```pruner(model)``` is called, the model is compressed with masks; after that you can fine-tune the model normally, and the **pruned weights that have been masked won't be updated**.
Therefore ```{ 'sparsity': 0.8, 'op_types': ['default'] }``` means that **all layers with the specified op_types will be compressed with the same 0.8 sparsity**. When ```pruner.compress()``` is called, the model is compressed with masks; after that you can fine-tune the model normally, and the **pruned weights that have been masked won't be updated**.
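To make the sparsity setting concrete, here is a standalone sketch, independent of NNI (all names below are ours, not NNI API), of the kind of binary mask a level-style pruner computes for `'sparsity': 0.8` — zeroing the 80% smallest-magnitude weights:

```python
# Standalone illustration (not NNI code): a level-style mask that
# zeroes the `sparsity` fraction of smallest-magnitude weights.

def level_mask(weights, sparsity):
    """Return a 0/1 mask keeping the largest-magnitude (1 - sparsity) share."""
    k = int(len(weights) * sparsity)      # how many weights to prune
    if k == 0:
        return [1] * len(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0 if abs(w) <= threshold else 1 for w in weights]

weights = [0.05, -0.9, 0.3, -0.01, 0.7]
mask = level_mask(weights, sparsity=0.8)   # -> [0, 1, 0, 0, 0]
masked = [w * m for w, m in zip(weights, mask)]
```

Only the single largest-magnitude weight survives here; during fine-tuning, only the unmasked weights keep updating.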

## Then, make this automatic

@@ -84,9 +84,9 @@ config_list_agp = [{'initial_sparsity': 0, 'final_sparsity': conv0_sparsity,
{'initial_sparsity': 0, 'final_sparsity': conv1_sparsity,
'start_epoch': 0, 'end_epoch': 3,
'frequency': 1,'op_name': 'conv1' },]
PRUNERS = {'level':LevelPruner(config_list_level),'agp':AGP_Pruner(config_list_agp)}
PRUNERS = {'level':LevelPruner(model, config_list_level),'agp':AGP_Pruner(model, config_list_agp)}
pruner = PRUNERS[params['prune_method']['_name']]
pruner(model)
pruner.compress()
... # fine tuning
acc = evaluate(model) # evaluation
nni.report_final_results(acc)
85 changes: 47 additions & 38 deletions docs/en_US/Compressor/Overview.md
@@ -25,22 +25,22 @@ Tensorflow code
```python
from nni.compression.tensorflow import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
pruner = LevelPruner(config_list)
pruner(tf.get_default_graph())
pruner = LevelPruner(tf.get_default_graph(), config_list)
pruner.compress()
```

PyTorch code

```python
from nni.compression.torch import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
pruner = LevelPruner(config_list)
pruner(model)
pruner = LevelPruner(model, config_list)
pruner.compress()
```

You can use other compression algorithms in the package of `nni.compression`. The algorithms are implemented in both PyTorch and Tensorflow, under `nni.compression.torch` and `nni.compression.tensorflow` respectively. You can refer to [Pruner](./Pruner.md) and [Quantizer](./Quantizer.md) for detail description of supported algorithms.

The function call `pruner(model)` receives the user-defined model (in TensorFlow the model can be obtained with `tf.get_default_graph()`, while in PyTorch the model is an instance of the defined model class) and modifies it by inserting masks. Then when you run the model, the masks take effect. The masks can be adjusted at runtime by the algorithms.
The function call `pruner.compress()` modifies the user-defined model (in TensorFlow the model can be obtained with `tf.get_default_graph()`, while in PyTorch the model is an instance of the defined model class) by inserting masks. Then when you run the model, the masks take effect. The masks can be adjusted at runtime by the algorithms.

When instantiating a compression algorithm, a `config_list` is passed in. We describe how to write this config below.

@@ -111,20 +111,26 @@ If you want to write a new pruning algorithm, you can write a class that inherit
# nni.compression.tensorflow.Pruner with
# nni.compression.torch.Pruner
class YourPruner(nni.compression.tensorflow.Pruner):
def __init__(self, config_list):
# suggest you to use the NNI defined spec for config
super().__init__(config_list)

def bind_model(self, model):
# this func can be used to remember the model or its weights
# in member variables, for getting their values during training
pass

def calc_mask(self, weight, config, **kwargs):
# weight is the target weight tensor
# config is the selected dict object in config_list for this layer
# kwargs contains op, op_types, and op_name
# design your mask and return your mask
def __init__(self, model, config_list):
"""
Suggest you to use the NNI defined spec for config
"""
super().__init__(model, config_list)

def calc_mask(self, layer, config):
"""
Pruners should overload this method to provide mask for weight tensors.
The mask must have the same shape and type comparing to the weight.
It will be applied with ``mul()`` operation on the weight.
This method is effectively hooked to ``forward()`` method of the model.
Parameters
----------
layer: LayerInfo
calculate mask for ``layer``'s weight
config: dict
the configuration for generating the mask
"""
return your_mask

# note for pytorch version, there is no sess in input arguments
@@ -133,16 +139,18 @@

# note for pytorch version, there is no sess in input arguments
def step(self, sess):
# can do some processing based on the model or weights binded
# in the func bind_model
"""
Can do some processing based on the model or weights binded
in the func bind_model
"""
pass
```

For the simplest algorithm, you only need to override `calc_mask`. It receives each layer's weight and selected configuration, as well as op information. You generate the mask for this weight in this function and return it. Then NNI applies the mask for you.
For the simplest algorithm, you only need to override ``calc_mask``. It receives the to-be-compressed layers one by one, along with their compression configuration. You generate the mask for each layer's weight in this function and return it. Then NNI applies the mask for you.

Some algorithms generate masks based on training progress, i.e., the epoch number. We provide `update_epoch` for the pruner to be aware of the training progress.
Some algorithms generate masks based on training progress, i.e., the epoch number. We provide `update_epoch` for the pruner to be aware of the training progress. It should be called at the beginning of each epoch.

Some algorithms may want global information for generating masks, for example, all weights of the model (for statistical information) or the model optimizer's information. NNI supports this requirement through `bind_model`. `bind_model` receives the complete model, so it can record any information (e.g., references to weights) it cares about. Then `step` can process or update the information according to the algorithm. You can refer to the [source code of built-in algorithms](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/compressors) for example implementations.
Some algorithms may want global information for generating masks, for example, all weights of the model (for statistical information). You can use `self.bound_model` in the Pruner class to access weights. If you also need the optimizer's information (for example, in PyTorch), you can override `__init__` to receive more arguments such as the model's optimizer. Then `step` can process or update the information according to the algorithm. You can refer to the [source code of built-in algorithms](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/compressors) for example implementations.
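To make the interface concrete, here is a plain-Python sketch. The base class below is a minimal stand-in for NNI's `Pruner`, and every other name is ours, not NNI API — a sketch of a pruner that keeps its state in `__init__` and emits masks in `calc_mask`:

```python
# Illustrative stand-ins only -- not the real NNI classes.

class Pruner:
    """Minimal stand-in for nni.compression.*.Pruner."""
    def __init__(self, model, config_list):
        self.bound_model = model        # mirrors self.bound_model in the text
        self.config_list = config_list

class ThresholdPruner(Pruner):
    """Prunes weights whose magnitude falls below a fixed threshold."""
    def __init__(self, model, config_list, threshold=0.1):
        super().__init__(model, config_list)
        self.threshold = threshold

    def calc_mask(self, layer_weights, config):
        # 0 prunes the weight, 1 keeps it
        return [0 if abs(w) < self.threshold else 1 for w in layer_weights]

pruner = ThresholdPruner(model=None, config_list=[{'op_types': ['default']}])
mask = pruner.calc_mask([0.05, -0.4, 0.02], config={})   # -> [0, 1, 0]
```

The real base class applies the returned mask to the layer's weight for you; this stub only shows where per-algorithm state and the mask computation live.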

### Quantization algorithm

@@ -154,20 +162,19 @@ The interface for customizing quantization algorithm is similar to that of pruni
# nni.compression.tensorflow.Quantizer with
# nni.compression.torch.Quantizer
class YourQuantizer(nni.compression.tensorflow.Quantizer):
def __init__(self, config_list):
# suggest you to use the NNI defined spec for config
super().__init__(config_list)

def bind_model(self, model):
# this func can be used to remember the model or its weights
# in member variables, for getting their values during training
pass
def __init__(self, model, config_list):
"""
Suggest you to use the NNI defined spec for config
"""
super().__init__(model, config_list)

def quantize_weight(self, weight, config, **kwargs):
# weight is the target weight tensor
# config is the selected dict object in config_list for this layer
# kwargs contains op, op_types, and op_name
# design your quantizer and return new weight
"""
weight is the target weight tensor
config is the selected dict object in config_list for this layer
kwargs contains op, op_types, and op_name
design your quantizer and return new weight
"""
return new_weight

# note for pytorch version, there is no sess in input arguments
@@ -176,8 +183,10 @@

# note for pytorch version, there is no sess in input arguments
def step(self, sess):
# can do some processing based on the model or weights binded
# in the func bind_model
"""
Can do some processing based on the model or weights binded
in the func bind_model
"""
pass
```
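For intuition about the kind of transform `quantize_weight` might apply, here is an NNI-independent sketch (the function name and scheme are ours, not NNI API) of uniform quantize-then-dequantize over a weight's range:

```python
# Illustrative only: uniform quantize-then-dequantize of a weight list.

def quantize_dequantize(weights, q_bits=8):
    """Snap each weight onto a (2**q_bits - 1)-step uniform grid over its range."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return list(weights)            # constant tensor: nothing to quantize
    scale = (hi - lo) / (2 ** q_bits - 1)
    return [lo + round((w - lo) / scale) * scale for w in weights]

quantized = quantize_dequantize([0.0, 0.4, 1.0], q_bits=2)
```

With `q_bits=2` there are only four grid points, so 0.4 snaps to 1/3; with `q_bits=8` the grid has 256 points and the rounding error becomes much smaller.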

16 changes: 8 additions & 8 deletions docs/en_US/Compressor/Pruner.md
@@ -13,16 +13,16 @@ Tensorflow code
```
from nni.compression.tensorflow import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
pruner = LevelPruner(config_list)
pruner(model_graph)
pruner = LevelPruner(model_graph, config_list)
pruner.compress()
```

PyTorch code
```
from nni.compression.torch import LevelPruner
config_list = [{ 'sparsity': 0.8, 'op_types': ['default'] }]
pruner = LevelPruner(config_list)
pruner(model)
pruner = LevelPruner(model, config_list)
pruner.compress()
```

#### User configuration for Level Pruner
@@ -53,8 +53,8 @@ config_list = [{
'frequency': 1,
'op_types': 'default'
}]
pruner = AGP_Pruner(config_list)
pruner(tf.get_default_graph())
pruner = AGP_Pruner(tf.get_default_graph(), config_list)
pruner.compress()
```
PyTorch code
```python
@@ -67,8 +67,8 @@
'frequency': 1,
'op_types': 'default'
}]
pruner = AGP_Pruner(config_list)
pruner(model)
pruner = AGP_Pruner(model, config_list)
pruner.compress()
```
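The schedule behind AGP is the cubic sparsity ramp from Zhu & Gupta's "To prune, or not to prune" paper. The helper below is our own illustrative reading of it (not NNI API); the parameter names mirror the config keys above, and the values (`initial_sparsity=0.0`, `final_sparsity=0.8`, epochs 0 through 10) are assumptions for the sake of the example:

```python
# Illustrative sketch of AGP's cubic sparsity ramp (not NNI API).

def agp_sparsity(epoch, initial_sparsity, final_sparsity, start_epoch, end_epoch):
    """Target sparsity at `epoch`, ramping cubically between the endpoints."""
    if epoch <= start_epoch:
        return initial_sparsity
    if epoch >= end_epoch:
        return final_sparsity
    progress = (epoch - start_epoch) / (end_epoch - start_epoch)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1 - progress) ** 3

# Halfway through a 0 -> 0.8 ramp over 10 epochs, sparsity is already 0.7:
midpoint = agp_sparsity(5, 0.0, 0.8, 0, 10)
```

The cubic exponent front-loads the pruning: most of the sparsity is introduced early, while later epochs make only small adjustments. NNI's AGP pruner advances along its schedule each time `update_epoch` is called.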

Second, you should add code below to update epoch number when you finish one epoch in your training code.
20 changes: 10 additions & 10 deletions docs/en_US/Compressor/Quantizer.md
@@ -8,11 +8,11 @@ We provide Naive Quantizer to quantize weights to 8 bits by default, you can use it
### Usage
tensorflow
```python
nni.compressors.tensorflow.NaiveQuantizer()(model_graph)
nni.compressors.tensorflow.NaiveQuantizer(model_graph).compress()
```
pytorch
```python
nni.compressors.torch.NaiveQuantizer()(model)
nni.compressors.torch.NaiveQuantizer(model).compress()
```

***
@@ -32,15 +32,15 @@ Tensorflow code
```python
from nni.compressors.tensorflow import QAT_Quantizer
config_list = [{ 'q_bits': 8, 'op_types': ['default'] }]
quantizer = QAT_Quantizer(config_list)
quantizer(tf.get_default_graph())
quantizer = QAT_Quantizer(tf.get_default_graph(), config_list)
quantizer.compress()
```
PyTorch code
```python
from nni.compressors.torch import QAT_Quantizer
config_list = [{ 'q_bits': 8, 'op_types': ['default'] }]
quantizer = QAT_Quantizer(config_list)
quantizer(model)
quantizer = QAT_Quantizer(model, config_list)
quantizer.compress()
```

You can view example for more information
@@ -61,15 +61,15 @@ Tensorflow code
```python
from nni.compressors.tensorflow import DoReFaQuantizer
config_list = [{ 'q_bits': 8, 'op_types': 'default' }]
quantizer = DoReFaQuantizer(config_list)
quantizer(tf.get_default_graph())
quantizer = DoReFaQuantizer(tf.get_default_graph(), config_list)
quantizer.compress()
```
PyTorch code
```python
from nni.compressors.torch import DoReFaQuantizer
config_list = [{ 'q_bits': 8, 'op_types': 'default' }]
quantizer = DoReFaQuantizer(config_list)
quantizer(model)
quantizer = DoReFaQuantizer(model, config_list)
quantizer.compress()
```

You can view example for more information
6 changes: 3 additions & 3 deletions docs/en_US/Tuner/BuiltinTuner.md
@@ -122,7 +122,7 @@ Its requirement of computation resource is relatively high. Specifically, it req

* **optimize_mode** (*maximize or minimize, optional, default = maximize*) - If 'maximize', the tuner will target to maximize metrics. If 'minimize', the tuner will target to minimize metrics.

* **population_size** (*int value (should > 0), optional, default = 20*) - the initial size of the population(trial num) in the evolution tuner. We suggest that `population_size` be much larger than `concurrency` so users can get the most out of the algorithm (and at least `concurrency`, or the tuner will fail on its first generation of parameters).
* **population_size** (*int value (should > 0), optional, default = 20*) - the initial size of the population (trial num) in the evolution tuner. We suggest that `population_size` be much larger than `concurrency` so users can get the most out of the algorithm (and at least `concurrency`, or the tuner will fail on its first generation of parameters).

**Usage example**

@@ -143,11 +143,11 @@

> Built-in Tuner Name: **SMAC**

**Please note that SMAC doesn't support running on windows currently. For the specific reason, please refer to this [GitHub issue](https://github.com/automl/SMAC3/issues/483).**
**Please note that SMAC doesn't support running on Windows currently. For the specific reason, please refer to this [GitHub issue](https://github.com/automl/SMAC3/issues/483).**

**Installation**

SMAC needs to be installed with the following command before first use.
SMAC needs to be installed with the following command before first use. As a reminder, `swig` is required for SMAC; on Ubuntu, `swig` can be installed with `apt`.

```bash
nnictl package install --name=SMAC