
merge master #270

Merged 36 commits on Sep 10, 2020

Commits
bbb9137
Fix nnictl experiment delete (#2791)
SparkSnail Aug 14, 2020
593d2d2
fix nnictl experiment delete (delete log folder in ~/.local/nnictl) (…
JunweiSUN Aug 19, 2020
71403bf
Update GradientFeatureSelector.md (#2803)
tblanchart Aug 21, 2020
e408e14
Search space zoo example fix (#2801)
tabVersion Aug 24, 2020
013adb1
fix remote connection logic (#2812)
SparkSnail Aug 24, 2020
7be6053
fix env:Path inheritance in powershell & set UTF-8 for correctly disp…
J-shang Aug 24, 2020
08edcfb
[v1.8 bug-bash] fix webui error (#2808)
Lijiaoa Aug 24, 2020
bf5e779
fix typo in ENAS comments (#2806)
tabVersion Aug 24, 2020
64c7f29
Add the support for tanh. (#2816)
zheng-ningxin Aug 24, 2020
a8e77fe
update nnicli notebook (#2810)
JunweiSUN Aug 24, 2020
8c6a640
Refine NAS benchmark docs and examples (#2800)
ultmaster Aug 24, 2020
b168b01
Fix visualization for ENAS micro (#2813)
ultmaster Aug 24, 2020
c45c30b
bug bash for sensitivity_pruner (#2815)
zheng-ningxin Aug 24, 2020
e6ef08f
TF compression fix and UT (#2817)
liuzhe-lz Aug 24, 2020
da541bf
Hotfix installation problem on Windows without conda/venv (#2793)
liuzhe-lz Aug 24, 2020
4f70fb3
fix broken link in compression benchmark doc (#2823)
suiguoxin Aug 25, 2020
625a72d
Fix IT (#2827)
liuzhe-lz Aug 26, 2020
beeea32
[bug bash] issue 2706 (#2818)
zheng-ningxin Aug 26, 2020
9f44d54
use doc in master branch instead of release branch (#2826)
suiguoxin Aug 26, 2020
3d2abd4
Fix remote & kubeflow it (#2828)
SparkSnail Aug 26, 2020
20d5062
Fix tracking url in AML (#2830)
SparkSnail Aug 27, 2020
9fdcf9e
add \config in install.ps1 (#2835)
J-shang Aug 27, 2020
320407b
update image of aml doc (#2836)
JunweiSUN Aug 27, 2020
e06a9dd
Release note v1.8 (#2829)
ultmaster Aug 27, 2020
bf8be1e
Merge pull request #2837 from microsoft/v1.8
ultmaster Aug 28, 2020
23b892c
fix wrong order of hidden and cell state (fix #2839)
jyh2986 Aug 31, 2020
baa129f
typo fix in classic nas (#2821)
tabVersion Aug 31, 2020
195b7f9
Update BuiltinTuner.md (#2814)
scarlett2018 Sep 3, 2020
e51aca4
update the NNI architecture figure (#2805)
QuanluZhang Sep 7, 2020
5318dd4
Merge pull request #2842 from jyh2986/master
ultmaster Sep 7, 2020
c6ec21f
Fix windows remote pipeline (#2861)
SparkSnail Sep 7, 2020
4287980
shut validator ipc warning (#2864)
liuzhe-lz Sep 7, 2020
3ca752f
Bump tree-kill from 1.2.0 to 1.2.2 in /src/nni_manager (#2870)
dependabot[bot] Sep 7, 2020
3bc8104
azure-pipelines.yml: Upgrade pylint and astroid (#2669)
cclauss Sep 7, 2020
2a8b0f6
add it case: user installed builtin tuner (#2859)
J-shang Sep 8, 2020
0a21a90
fix some spelling (#2855)
J-shang Sep 9, 2020
4 changes: 2 additions & 2 deletions README.md
@@ -25,7 +25,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* Researchers and data scientists who want to easily **implement and experiment with new AutoML algorithms**, be it a hyperparameter tuning algorithm, a neural architecture search algorithm or a model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.

### **[NNI v1.7 has been released!](https://github.com/microsoft/nni/releases) &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
### **[NNI v1.8 has been released!](https://github.com/microsoft/nni/releases) &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**

## **NNI capabilities at a glance**

@@ -246,7 +246,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via cloning the source code.

```bash
git clone -b v1.7 https://github.com/Microsoft/nni.git
git clone -b v1.8 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
2 changes: 1 addition & 1 deletion azure-pipelines.yml
@@ -57,7 +57,7 @@ jobs:
- script: |
set -e
python3 -m pip install --upgrade pip setuptools --user
python3 -m pip install pylint==2.3.1 astroid==2.2.5 --user
python3 -m pip install pylint==2.6.0 astroid==2.4.2 --user
python3 -m pip install coverage --user
python3 -m pip install thop --user
echo "##vso[task.setvariable variable=PATH]${HOME}/.local/bin:${PATH}"
1 change: 1 addition & 0 deletions deployment/pypi/install.ps1
@@ -44,6 +44,7 @@ $env:PATH = $NNI_NODE_FOLDER+';'+$env:PATH
cd $CWD\..\..\src\nni_manager
yarn
yarn build
Copy-Item config -Destination .\dist\ -Recurse -Force
cd $CWD\..\..\src\webui
yarn
yarn build
7 changes: 4 additions & 3 deletions docs/en_US/CommunitySharings/ModelCompressionComparison.md
@@ -60,7 +60,8 @@ From the experiment result, we get the following conclusions:

* The experiment results are all collected with the default configuration of the pruners in nni, which means that when we call a pruner class in nni, we don't change any default class arguments.

* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/ModelSpeedup.md). This avoids potential issues of counting them of masked models.
* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/ModelSpeedup.md).
This avoids the potential issue of counting them on masked models.

* The experiment code can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py).

@@ -75,8 +76,8 @@ From the experiment result, we get the following conclusions:
}
```

* The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/experiment_data).
You can refer to [analyze](https://github.com/microsoft/nni/tree/master/examples/model_compress/experiment_data/analyze.py) to plot new performance comparison figures.
* The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners).
You can refer to [analyze](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
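
For reference, the sketch below shows how such a FLOPs/parameters count could be obtained in a few lines. It assumes the v1.8-era import path `nni.compression.torch.utils.counter.count_flops_params` and uses a stock torchvision model as a stand-in for a pruned and sped-up model; consult the counter documentation linked above for the exact API.

```python
# Hedged sketch: count FLOPs and parameters of a (sped-up) model.
# Assumes the counter utility lives at nni.compression.torch.utils.counter (NNI v1.8-era path).
import torchvision.models as models
from nni.compression.torch.utils.counter import count_flops_params

model = models.resnet18()  # stand-in; use the model returned by model speedup in practice
flops, params = count_flops_params(model, (1, 3, 224, 224))  # dummy input shape (N, C, H, W)
print(f'FLOPs: {flops}, parameters: {params}')
```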

## Contribution

2 changes: 1 addition & 1 deletion docs/en_US/Compressor/Overview.md
@@ -42,7 +42,7 @@ Pruning algorithms compress the original network by removing redundant weights o
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |

You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/Benchmark.md) for the performance of these pruners on some benchmark problems.
You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
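
To make the table above concrete, here is a minimal sketch of the call pattern these pruners share, using `LevelPruner` and the v1.8-era `nni.compression.torch` import path as assumptions; the `config_list` keys accepted by each pruner differ, so check the individual pruner pages.

```python
# Minimal sketch of the common pruner workflow (assumes nni.compression.torch, NNI v1.8-era API).
import torchvision.models as models
from nni.compression.torch import LevelPruner

model = models.resnet18()

# Prune 50% of the weights in all layer types supported by default.
config_list = [{'sparsity': 0.5, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
model = pruner.compress()  # wraps layers with weight masks; fine-tune afterwards

# Export weights and masks so model speedup can later make the pruning real.
pruner.export_model(model_path='pruned_resnet18.pth', mask_path='mask_resnet18.pth')
```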

### Quantization Algorithms

2 changes: 1 addition & 1 deletion docs/en_US/FeatureEngineering/GradientFeatureSelector.md
@@ -1,6 +1,6 @@
## GradientFeatureSelector

The algorithm in GradinetFeatureSelector comes from ["Feature Gradients: Scalable Feature Selection via Discrete Relaxation"](https://arxiv.org/pdf/1908.10382.pdf).
The algorithm in GradientFeatureSelector comes from ["Feature Gradients: Scalable Feature Selection via Discrete Relaxation"](https://arxiv.org/pdf/1908.10382.pdf).

GradientFeatureSelector is a gradient-based search algorithm for feature selection.
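
A minimal usage sketch follows; it assumes the `FeatureGradientSelector` class exported from `nni.feature_engineering.gradient_selector` and synthetic scikit-learn data, so treat the names and defaults as illustrative rather than authoritative.

```python
# Illustrative sketch of GradientFeatureSelector usage
# (assumes nni.feature_engineering.gradient_selector.FeatureGradientSelector).
from sklearn.datasets import make_classification
from nni.feature_engineering.gradient_selector import FeatureGradientSelector

X, y = make_classification(n_samples=1000, n_features=50, n_informative=10, random_state=0)

selector = FeatureGradientSelector(n_features=10)  # keep the 10 highest-scoring features
selector.fit(X, y)
print(selector.get_selected_features())            # indices of the selected columns
```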
9 changes: 5 additions & 4 deletions docs/en_US/NAS/Benchmarks.md
@@ -1,4 +1,4 @@
# NAS Benchmarks (experimental)
# NAS Benchmarks

```eval_rst
.. toctree::
@@ -8,12 +8,13 @@
```

## Introduction

To improve the reproducibility of NAS algorithms as well as reduce computing resource requirements, researchers proposed a series of NAS benchmarks such as [NAS-Bench-101](https://arxiv.org/abs/1902.09635), [NAS-Bench-201](https://arxiv.org/abs/2001.00326), [NDS](https://arxiv.org/abs/1905.13214), etc. NNI provides a query interface for users to acquire these benchmarks. Within just a few lines of code, researchers are able to evaluate their NAS algorithms easily and fairly by utilizing these benchmarks.

## Prerequisites

* Please prepare a folder to household all the benchmark databases. By default, it can be found at `${HOME}/.nni/nasbenchmark`. You can place it anywhere you like, and specify it in `NASBENCHMARK_DIR` before importing NNI.
* Please install `peewee` via `pip install peewee`, which NNI uses to connect to database.
* Please prepare a folder to hold all the benchmark databases. By default, it can be found at `${HOME}/.nni/nasbenchmark`. You can place it anywhere you like, and specify it in `NASBENCHMARK_DIR` via `export NASBENCHMARK_DIR=/path/to/your/nasbenchmark` before importing NNI.
* Please install `peewee` via `pip3 install peewee`, which NNI uses to connect to the database.
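
As a quick illustration of the prerequisites above, the sketch below sets `NASBENCHMARK_DIR` from Python before the benchmark modules are imported and then issues a NAS-Bench-201 query. The `query_nb201_trial_stats` helper and the architecture encoding are assumptions based on the v1.8-era benchmark examples; verify them against the API reference before relying on them.

```python
# Sketch: point NNI at a custom benchmark folder *before* importing the benchmark modules,
# then query NAS-Bench-201 (query_nb201_trial_stats and the arch encoding are assumed, v1.8-era).
import os
os.environ['NASBENCHMARK_DIR'] = '/data/nasbenchmark'  # must be set before the import below

from nni.nas.benchmarks.nasbench201 import query_nb201_trial_stats

# All recorded runs of one cell architecture trained for 200 epochs on CIFAR-100.
arch = {'0_1': 'avg_pool_3x3', '0_2': 'conv_1x1', '1_2': 'skip_connect',
        '0_3': 'conv_1x1', '1_3': 'skip_connect', '2_3': 'none'}
for trial in query_nb201_trial_stats(arch, 200, 'cifar100'):
    print(trial)
```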

## Data Preparation

@@ -24,7 +25,7 @@ To avoid storage and legality issues, we do not provide any prepared databases.
git clone -b ${NNI_VERSION} https://github.com/microsoft/nni
cd nni/examples/nas/benchmarks
```
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.7`.
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.8`.

2. Install dependencies via `pip3 install -r xxx.requirements.txt`. `xxx` can be `nasbench101`, `nasbench201` or `nds`.
3. Generate the database via `./xxx.sh`. The directory that stores the benchmark file can be configured with the `NASBENCHMARK_DIR` environment variable, which defaults to `~/.nni/nasbenchmark`. Note that the NAS-Bench-201 dataset will be downloaded from Google Drive.
4 changes: 2 additions & 2 deletions docs/en_US/Overview.md
@@ -10,7 +10,7 @@ NNI (Neural Network Intelligence) is a toolkit to help users design and tune mac
The figure below shows the high-level architecture of NNI.

<p align="center">
<img src="https://user-images.githubusercontent.com/23273522/51816536-ed055580-2301-11e9-8ad8-605a79ee1b9a.png" alt="drawing" width="700"/>
<img src="https://user-images.githubusercontent.com/16907603/92089316-94147200-ee00-11ea-9944-bf3c4544257f.png" alt="drawing" width="700"/>
</p>

## Key Concepts
@@ -86,4 +86,4 @@ The auto-feature-engineering algorithms usually have a bunch of hyperparameters
* [Examples](TrialExample/MnistExamples.md)
* [Neural Architecture Search on NNI](NAS/Overview.md)
* [Automatic model compression on NNI](Compressor/Overview.md)
* [Automatic feature engineering on NNI](FeatureEngineering/Overview.md)
* [Automatic feature engineering on NNI](FeatureEngineering/Overview.md)
76 changes: 76 additions & 0 deletions docs/en_US/Release.md
@@ -1,5 +1,81 @@
# ChangeLog

# Release 1.8 - 8/27/2020

## Major updates

### Training service

* Access trial log directly on WebUI (local mode only) (#2718)
* Add OpenPAI trial job detail link (#2703)
* Support GPU scheduler in reusable environment (#2627) (#2769)
* Add timeout for `web_channel` in `trial_runner` (#2710)
* Show environment error message in AzureML mode (#2724)
* Add more log information when copying data in OpenPAI mode (#2702)

### WebUI, nnictl and nnicli

* Improve hyper-parameter parallel coordinates plot (#2691) (#2759)
* Add pagination for trial job list (#2738) (#2773)
* Enable panel close when clicking overlay region (#2734)
* Remove support for Multiphase on WebUI (#2760)
* Support save and restore experiments (#2750)
* Add intermediate results in export result (#2706)
* Add [command](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Tutorial/Nnictl.md#nnictl-trial) to list trial results with highest/lowest metrics (#2747)
* Improve the user experience of [nnicli](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/nnicli_ref.md) with [examples](https://github.com/microsoft/nni/blob/v1.8/examples/notebooks/retrieve_nni_info_with_python.ipynb) (#2713)

### Neural architecture search

* [Search space zoo: ENAS and DARTS](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/NAS/SearchSpaceZoo.md) (#2589)
* API to query intermediate results in NAS benchmark (#2728)

### Model compression

* Support the List/Tuple Construct/Unpack operation for TorchModuleGraph (#2609)
* Model speedup improvement: Add support of DenseNet and InceptionV3 (#2719)
* Support the multiple successive tuple unpack operations (#2768)
* [Doc of comparing the performance of supported pruners](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/CommunitySharings/ModelCompressionComparison.md) (#2742)
* New pruners: [Sensitivity pruner](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Compressor/Pruner.md#sensitivity-pruner) (#2684) and [AMC pruner](https://github.com/microsoft/nni/blob/v1.8/docs/en_US/Compressor/Pruner.md) (#2573) (#2786)
* TensorFlow v2 support in model compression (#2755)

### Backward incompatible changes

* Update the default experiment folder from `$HOME/nni/experiments` to `$HOME/nni-experiments`. If you want to view the experiments created by previous NNI releases, you can move the experiments folders from `$HOME/nni/experiments` to `$HOME/nni-experiments` manually. (#2686) (#2753)
* Dropped support for Python 3.5 and scikit-learn 0.20 (#2778) (#2777) (#2783) (#2787) (#2788) (#2790)

### Others

* Upgrade TensorFlow version in Docker image (#2732) (#2735) (#2720)

## Examples

* Remove gpuNum in assessor examples (#2641)

## Documentation

* Improve customized tuner documentation (#2628)
* Fix several typos and grammar mistakes in documentation (#2637 #2638, thanks @tomzx)
* Improve AzureML training service documentation (#2631)
* Improve CI of Chinese translation (#2654)
* Improve OpenPAI training service documentation (#2685)
* Improve documentation of community sharing (#2640)
* Add tutorial of Colab support (#2700)
* Improve documentation structure for model compression (#2676)

## Bug fixes

* Fix mkdir error in training service (#2673)
* Fix bug when using chmod in remote training service (#2689)
* Fix dependency issue by making `_graph_utils` imported inline (#2675)
* Fix mask issue in `SimulatedAnnealingPruner` (#2736)
* Fix intermediate graph zooming issue (#2738)
* Fix issue when dict is unordered when querying NAS benchmark (#2728)
* Fix import issue for gradient selector dataloader iterator (#2690)
* Fix support of adding tens of machines in remote training service (#2725)
* Fix several styling issues in WebUI (#2762 #2737)
* Fix support of unusual types in metrics including NaN and Infinity (#2782)
* Fix nnictl experiment delete (#2791)

# Release 1.7 - 7/8/2020

## Major Features
2 changes: 1 addition & 1 deletion docs/en_US/TrainingService/AMLMode.md
@@ -89,4 +89,4 @@ cd nni/examples/trials/mnist-tfv1

nnictl create --config config_aml.yml
```
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.7`.
Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.8`.
4 changes: 2 additions & 2 deletions docs/en_US/Tuner/BuiltinTuner.md
@@ -1,6 +1,6 @@
# Built-in Tuners for Hyperparameter Tuning
# HyperParameter Tuning with NNI Built-in Tuners

NNI provides state-of-the-art tuning algorithms as part of our built-in tuners and makes them easy to use. Below is the brief summary of NNI's current built-in tuners:
To fit a machine/deep learning model into different tasks/problems, hyperparameters always need to be tuned. Automating the process of hyperparameter tuning always requires a good tuning algorithm. NNI has provided state-of-the-art tuning algorithms as part of our built-in tuners and makes them easy to use. Below is a brief summary of NNI's current built-in tuners:

Note: Click the **Tuner's name** to get the Tuner's installation requirements, suggested scenario, and an example configuration. A link for a detailed description of each algorithm is located at the end of the suggested scenario for each tuner. Here is an [article](../CommunitySharings/HpoComparison.md) comparing different Tuners on several problems.
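
Whichever built-in tuner is configured, the trial code consumes its hyperparameters through the same small NNI API. The sketch below shows that trial side; the parameter names and the `train_one_epoch` helper are placeholders, not part of any specific example.

```python
# Minimal trial sketch: receive hyperparameters from the configured tuner and report results.
import nni

def run_trial(params):
    # 'lr' and 'batch_size' are placeholders; they must match the keys in your search space file.
    lr = params.get('lr', 0.01)
    batch_size = params.get('batch_size', 32)
    accuracy = 0.0
    for epoch in range(10):
        accuracy = train_one_epoch(lr, batch_size)  # hypothetical training helper
        nni.report_intermediate_result(accuracy)    # shown on the WebUI and used by assessors
    nni.report_final_result(accuracy)               # the metric the tuner optimizes

if __name__ == '__main__':
    run_trial(nni.get_next_parameter())             # hyperparameters chosen by the tuner
```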

2 changes: 1 addition & 1 deletion docs/en_US/Tutorial/ExperimentConfig.md
@@ -573,7 +573,7 @@ Used to specify designated GPU devices for NNI, if it is set, only the specified

#### maxTrialNumPerGpu

Optional. Integer. Default: 99999.
Optional. Integer. Default: 1.

Used to specify the max concurrency trial number on a GPU device.

4 changes: 2 additions & 2 deletions docs/en_US/Tutorial/InstallationLinux.md
@@ -19,7 +19,7 @@ Installation on Linux and macOS follow the same instructions, given below.
Prerequisites: `python 64-bit >=3.6`, `git`, `wget`

```bash
git clone -b v1.7 https://github.com/Microsoft/nni.git
git clone -b v1.8 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
@@ -35,7 +35,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via cloning the source code.

```bash
git clone -b v1.7 https://github.com/Microsoft/nni.git
git clone -b v1.8 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
4 changes: 2 additions & 2 deletions docs/en_US/Tutorial/InstallationWin.md
@@ -29,7 +29,7 @@ If you want to contribute to NNI, refer to [setup development environment](Setup
* From source code

```bat
git clone -b v1.7 https://github.com/Microsoft/nni.git
git clone -b v1.8 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
@@ -41,7 +41,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Clone examples within source code.

```bat
git clone -b v1.7 https://github.com/Microsoft/nni.git
git clone -b v1.8 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
2 changes: 1 addition & 1 deletion docs/en_US/conf.py
@@ -29,7 +29,7 @@
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = 'v1.7'
release = 'v1.8'

# -- General configuration ---------------------------------------------------

Binary file modified docs/img/aml_cluster.png
74 changes: 52 additions & 22 deletions examples/model_compress/model_prune_tf.py
@@ -28,21 +28,31 @@ def get_dataset(dataset_name='mnist'):

def create_model(model_name='naive'):
assert model_name == 'naive'
return tf.keras.Sequential([
tf.keras.layers.Conv2D(filters=20, kernel_size=5),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.MaxPool2D(pool_size=2),
tf.keras.layers.Conv2D(filters=20, kernel_size=5),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.MaxPool2D(pool_size=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=500),
tf.keras.layers.ReLU(),
tf.keras.layers.Dense(units=10),
tf.keras.layers.Softmax()
])
return NaiveModel()

class NaiveModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.seq_layers = [
tf.keras.layers.Conv2D(filters=20, kernel_size=5),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.MaxPool2D(pool_size=2),
tf.keras.layers.Conv2D(filters=20, kernel_size=5),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.ReLU(),
tf.keras.layers.MaxPool2D(pool_size=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=500),
tf.keras.layers.ReLU(),
tf.keras.layers.Dense(units=10),
tf.keras.layers.Softmax()
]

def call(self, x):
for layer in self.seq_layers:
x = layer(x)
return x


def create_pruner(model, pruner_name):
@@ -55,20 +65,40 @@ def main(args):
model_name = prune_config[args.pruner_name]['model_name']
dataset_name = prune_config[args.pruner_name]['dataset_name']
train_set, test_set = get_dataset(dataset_name)
model = create_model(model_name)

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, decay=1e-4)
model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model = create_model(model_name)

print('start training')
model.fit(train_set[0], train_set[1], batch_size=args.batch_size, epochs=args.pretrain_epochs, validation_data=test_set)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, decay=1e-4)
model.compile(
optimizer=optimizer,
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
model.fit(
train_set[0],
train_set[1],
batch_size=args.batch_size,
epochs=args.pretrain_epochs,
validation_data=test_set
)

print('start model pruning')
optimizer_finetune = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.9, decay=1e-4)
pruner = create_pruner(model, args.pruner_name)
model = pruner.compress()
model.compile(optimizer=optimizer_finetune, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_set[0], train_set[1], batch_size=args.batch_size, epochs=args.prune_epochs, validation_data=test_set)
model.compile(
optimizer=optimizer_finetune,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'],
run_eagerly=True # NOTE: Important, model compression does not work in graph mode!
)
model.fit(
train_set[0],
train_set[1],
batch_size=args.batch_size,
epochs=args.prune_epochs,
validation_data=test_set
)


if __name__ == '__main__':
Expand Down
2 changes: 1 addition & 1 deletion examples/model_compress/models/mobilenet.py
Original file line number Diff line number Diff line change
Expand Up @@ -53,7 +53,7 @@ def __init__(self, n_class, profile='normal'):
def forward(self, x):
x = self.conv1(x)
x = self.features(x)
x = x.mean(3).mean(2) # global average pooling
x = x.mean([2, 3]) # global average pooling

x = self.classifier(x)
return x
5 changes: 4 additions & 1 deletion examples/model_compress/models/mobilenet_v2.py
@@ -108,7 +108,10 @@ def __init__(self, n_class=1000, input_size=224, width_mult=1.):

def forward(self, x):
x = self.features(x)
x = x.mean(3).mean(2)
# it's the same as .mean(3).mean(2), but
# speedup only supports the mean option
# whose output has only two dimensions
x = x.mean([2, 3])
x = self.classifier(x)
return x

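The comment added above states that `x.mean([2, 3])` matches `x.mean(3).mean(2)`; a quick self-contained check of that equivalence, independent of the model code, is sketched below.

```python
# Check that mean over dims [2, 3] equals chained mean(3).mean(2) on an NCHW tensor,
# i.e. both implement global average pooling.
import torch

x = torch.randn(2, 8, 7, 7)                        # (batch, channels, H, W)
assert torch.allclose(x.mean([2, 3]), x.mean(3).mean(2))
print(x.mean([2, 3]).shape)                        # torch.Size([2, 8])
```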
2 changes: 1 addition & 1 deletion examples/nas/benchmarks/nasbench101.sh
@@ -15,5 +15,5 @@ fi
echo "Generating database..."
rm -f ${NASBENCHMARK_DIR}/nasbench101.db ${NASBENCHMARK_DIR}/nasbench101.db-journal
mkdir -p ${NASBENCHMARK_DIR}
python -m nni.nas.benchmarks.nasbench101.db_gen nasbench_full.tfrecord
python3 -m nni.nas.benchmarks.nasbench101.db_gen nasbench_full.tfrecord
rm -f nasbench_full.tfrecord