From d8388957e816117dbe26f15564423ba8a5684f50 Mon Sep 17 00:00:00 2001 From: QuanluZhang Date: Sun, 19 Jan 2020 10:13:55 +0800 Subject: [PATCH 1/9] refactor the index of readthedocs (#1940) --- docs/en_US/FeatureEngineering/Overview.md | 8 +- docs/en_US/Release.md | 2 +- .../TrainingService/SupportTrainingService.md | 1 + docs/en_US/Tutorial/FAQ.md | 2 +- docs/en_US/Tutorial/HowToUseDocker.md | 2 +- .../{Installation.md => InstallationLinux.md} | 46 ++------- docs/en_US/Tutorial/InstallationWin.md | 96 +++++++++++++++++++ docs/en_US/Tutorial/NniOnWindows.md | 49 ---------- docs/en_US/Tutorial/QuickStart.md | 2 +- docs/en_US/advanced.rst | 5 - docs/en_US/assessors.rst | 19 ---- docs/en_US/builtin_assessor.rst | 10 ++ docs/en_US/builtin_tuner.rst | 7 +- docs/en_US/feature_engineering.rst | 2 - docs/en_US/hpo_advanced.rst | 9 ++ docs/en_US/hyperparameter_tune.rst | 15 +++ docs/en_US/index.rst | 19 ++-- docs/en_US/installation.rst | 12 +++ docs/en_US/model_compression.rst | 2 - docs/en_US/nas.rst | 2 - docs/en_US/reference.rst | 13 +-- docs/en_US/tuners.rst | 18 ---- docs/en_US/tutorials.rst | 20 ---- 23 files changed, 180 insertions(+), 181 deletions(-) rename docs/en_US/Tutorial/{Installation.md => InstallationLinux.md} (65%) create mode 100644 docs/en_US/Tutorial/InstallationWin.md delete mode 100644 docs/en_US/Tutorial/NniOnWindows.md delete mode 100644 docs/en_US/advanced.rst delete mode 100644 docs/en_US/assessors.rst create mode 100644 docs/en_US/hpo_advanced.rst create mode 100644 docs/en_US/hyperparameter_tune.rst create mode 100644 docs/en_US/installation.rst delete mode 100644 docs/en_US/tuners.rst delete mode 100644 docs/en_US/tutorials.rst diff --git a/docs/en_US/FeatureEngineering/Overview.md b/docs/en_US/FeatureEngineering/Overview.md index 478693c399..7790cbd8be 100644 --- a/docs/en_US/FeatureEngineering/Overview.md +++ b/docs/en_US/FeatureEngineering/Overview.md @@ -7,7 +7,7 @@ For now, we support the following feature selector: - 
[GBDTSelector](./GBDTSelector.md)

-# How to use?
+## How to use?

```python
from nni.feature_engineering.gradient_selector import GradientFeatureSelector
@@ -30,7 +30,7 @@ print(fgs.get_selected_features(...))

When using a built-in selector, you first need to `import` a feature selector and `initialize` it. You can call the function `fit` in the selector to pass the data to it. After that, you can use `get_selected_features` to get the important features. The function parameters might differ between selectors, so you need to check the docs before using them.

-# How to customize?
+## How to customize?

NNI provides _state-of-the-art_ feature selector algorithms as built-in selectors. NNI also supports building a feature selector by yourself.
@@ -239,7 +239,7 @@ print("Pipeline Score: ", pipeline.score(X_train, y_train))

```

-# Benchmark
+## Benchmark

`Baseline` means we directly pass the data to LogisticRegression without any feature selection. For this benchmark, we use only 10% of the training data as test data. For the GradientFeatureSelector, we take only the top 20 features. The metric is the mean accuracy on the given test data and labels.

@@ -257,7 +257,7 @@ The dataset of benchmark could be download in [here](https://www.csie.ntu.edu.tw

The code can be referenced at `/examples/feature_engineering/gradient_feature_selector/benchmark_test.py`. 
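The import → initialize → `fit` → `get_selected_features` call pattern described above can be sketched with a toy stand-in selector. The class below is hypothetical and only mirrors the interface shape (ranking features by absolute correlation with the label); it is not one of NNI's built-in selectors:

```python
# Toy illustration of the selector call pattern:
# import -> initialize -> fit(X, y) -> get_selected_features().
# This mock ranks features by absolute correlation with the label;
# it is NOT the NNI implementation, just the same interface shape.

class MockFeatureSelector:
    def __init__(self, n_features=2):
        self.n_features = n_features
        self.selected_features_ = None

    def fit(self, X, y):
        def corr(col):
            # absolute Pearson correlation between one feature column and y
            n = len(col)
            mx, my = sum(col) / n, sum(y) / n
            cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
            vx = sum((a - mx) ** 2 for a in col) ** 0.5
            vy = sum((b - my) ** 2 for b in y) ** 0.5
            return abs(cov / (vx * vy)) if vx and vy else 0.0

        columns = list(zip(*X))  # transpose rows into feature columns
        scores = [(corr(list(c)), i) for i, c in enumerate(columns)]
        scores.sort(reverse=True)
        self.selected_features_ = sorted(i for _, i in scores[: self.n_features])
        return self

    def get_selected_features(self):
        return self.selected_features_

# Feature 0 tracks the label, feature 1 is constant noise, feature 2 is anti-correlated.
X = [[1.0, 5.0, 9.0],
     [2.0, 5.0, 8.0],
     [3.0, 5.0, 7.0],
     [4.0, 5.0, 6.0]]
y = [1.0, 2.0, 3.0, 4.0]

selector = MockFeatureSelector(n_features=2)
selector.fit(X, y)
print(selector.get_selected_features())  # -> [0, 2]
```

The real selectors follow the same shape, which is why the docs above stress checking each selector's own parameters before calling `fit`.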
-## **Reference and Feedback** +## Reference and Feedback * To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub; * To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub; * To know more about [Neural Architecture Search with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Overview.md); diff --git a/docs/en_US/Release.md b/docs/en_US/Release.md index b2d57e8b00..28f1a6e003 100644 --- a/docs/en_US/Release.md +++ b/docs/en_US/Release.md @@ -213,7 +213,7 @@ ### Major Features -* [Support NNI on Windows](Tutorial/NniOnWindows.md) +* [Support NNI on Windows](Tutorial/InstallationWin.md) * NNI running on windows for local mode * [New advisor: BOHB](Tuner/BohbAdvisor.md) * Support a new advisor BOHB, which is a robust and efficient hyperparameter tuning algorithm, combines the advantages of Bayesian optimization and Hyperband diff --git a/docs/en_US/TrainingService/SupportTrainingService.md b/docs/en_US/TrainingService/SupportTrainingService.md index dfb0df3fe8..56c4253aa4 100644 --- a/docs/en_US/TrainingService/SupportTrainingService.md +++ b/docs/en_US/TrainingService/SupportTrainingService.md @@ -4,6 +4,7 @@ NNI TrainingService provides the training platform for running NNI trial jobs. N NNI not only provides few built-in training service options, but also provides a method for customers to build their own training service easily. ## Built-in TrainingService + |TrainingService|Brief Introduction| |---|---| |[__Local__](./LocalMode.md)|NNI supports running an experiment on local machine, called local mode. 
Local mode means that NNI will run the trial jobs and nniManager process in same machine, and support gpu schedule function for trial jobs.| diff --git a/docs/en_US/Tutorial/FAQ.md b/docs/en_US/Tutorial/FAQ.md index 16299e5b20..0d0c1d2fed 100644 --- a/docs/en_US/Tutorial/FAQ.md +++ b/docs/en_US/Tutorial/FAQ.md @@ -45,7 +45,7 @@ Probably it's a problem with your network config. Here is a checklist. ### NNI on Windows problems -Please refer to [NNI on Windows](NniOnWindows.md) +Please refer to [NNI on Windows](InstallationWin.md#FAQ) ### More FAQ issues diff --git a/docs/en_US/Tutorial/HowToUseDocker.md b/docs/en_US/Tutorial/HowToUseDocker.md index c081e38dc7..d480094329 100644 --- a/docs/en_US/Tutorial/HowToUseDocker.md +++ b/docs/en_US/Tutorial/HowToUseDocker.md @@ -35,7 +35,7 @@ Note: If you start a docker image using NNI's offical image `msranni/nni`, you could directly start NNI experiments by using `nnictl` command. Our offical image has NNI's running environment and basic python and deep learning frameworks environment. -If you start your own docker image, you may need to install NNI package first, please [refer](Installation.md). +If you start your own docker image, you may need to install NNI package first, please refer to [NNI installation](InstallationLinux.md). If you want to run NNI's offical examples, you may need to clone NNI repo in github using ``` diff --git a/docs/en_US/Tutorial/Installation.md b/docs/en_US/Tutorial/InstallationLinux.md similarity index 65% rename from docs/en_US/Tutorial/Installation.md rename to docs/en_US/Tutorial/InstallationLinux.md index e7711bd2d0..f5a562fe8c 100644 --- a/docs/en_US/Tutorial/Installation.md +++ b/docs/en_US/Tutorial/InstallationLinux.md @@ -1,10 +1,10 @@ -# Installation of NNI +# Installation on Linux & Mac -Currently we support installation on Linux, Mac and Windows. +## Installation -## **Installation on Linux & Mac** +Installation on Linux and Mac follow the same instruction below. 
-* __Install NNI through pip__ +### __Install NNI through pip__ Prerequisite: `python >= 3.5` @@ -12,7 +12,7 @@ Currently we support installation on Linux, Mac and Windows. python3 -m pip install --upgrade nni ``` -* __Install NNI through source code__ +### __Install NNI through source code__ Prerequisite: `python >=3.5`, `git`, `wget` @@ -22,33 +22,12 @@ Currently we support installation on Linux, Mac and Windows. ./install.sh ``` -* __Install NNI in docker image__ +### __Install NNI in docker image__ You can also install NNI in a docker image. Please follow the instructions [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/README.md) to build NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`. -## **Installation on Windows** - Anaconda or Miniconda is highly recommended. - -* __Install NNI through pip__ - - Prerequisite: `python(64-bit) >= 3.5` - - ```bash - python -m pip install --upgrade nni - ``` - -* __Install NNI through source code__ - - Prerequisite: `python >=3.5`, `git`, `PowerShell`. - - ```bash - git clone -b v0.8 https://github.com/Microsoft/nni.git - cd nni - powershell -ExecutionPolicy Bypass -file install.ps1 - ``` - -## **System requirements** +## System requirements Below are the minimum system requirements for NNI on Linux. Due to potential programming changes, the minimum system requirements for NNI may change over time. @@ -74,17 +53,6 @@ Below are the minimum system requirements for NNI on macOS. Due to potential pro |**Internet**|Boardband internet connection| |**Resolution**|1024 x 768 minimum display resolution| -Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is well tested and recommend. Due to potential programming changes, the minimum system requirements for NNI may change over time. 
- -||Minimum Requirements|Recommended Specifications| -|---|---|---| -|**Operating System**|Windows 10|Windows 10| -|**CPU**|Intel® Core™ i3 or AMD Phenom™ X3 8650|Intel® Core™ i5 or AMD Phenom™ II X3 or better| -|**GPU**|NVIDIA® GeForce® GTX 460|NVIDIA® GeForce® GTX 660 or better| -|**Memory**|4 GB RAM|6 GB RAM| -|**Storage**|30 GB available hare drive space| -|**Internet**|Boardband internet connection| -|**Resolution**|1024 x 768 minimum display resolution| ## Further reading diff --git a/docs/en_US/Tutorial/InstallationWin.md b/docs/en_US/Tutorial/InstallationWin.md new file mode 100644 index 0000000000..2531f5b3ad --- /dev/null +++ b/docs/en_US/Tutorial/InstallationWin.md @@ -0,0 +1,96 @@ +# Installation on Windows + +## Installation + +Anaconda or Miniconda is highly recommended. + +### __Install NNI through pip__ + + Prerequisite: `python(64-bit) >= 3.5` + + ```bash + python -m pip install --upgrade nni + ``` + +### __Install NNI through source code__ + + Prerequisite: `python >=3.5`, `git`, `PowerShell`. + + ```bash + git clone -b v0.8 https://github.com/Microsoft/nni.git + cd nni + powershell -ExecutionPolicy Bypass -file install.ps1 + ``` + +## System requirements + +Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is well tested and recommend. Due to potential programming changes, the minimum system requirements for NNI may change over time. 
+
+||Minimum Requirements|Recommended Specifications|
+|---|---|---|
+|**Operating System**|Windows 10|Windows 10|
+|**CPU**|Intel® Core™ i3 or AMD Phenom™ X3 8650|Intel® Core™ i5 or AMD Phenom™ II X3 or better|
+|**GPU**|NVIDIA® GeForce® GTX 460|NVIDIA® GeForce® GTX 660 or better|
+|**Memory**|4 GB RAM|6 GB RAM|
+|**Storage**|30 GB available hard drive space|
+|**Internet**|Broadband internet connection|
+|**Resolution**|1024 x 768 minimum display resolution|
+
+
+## Run NNI examples on Windows
+
+When installation is done, use the **config_windows.yml** configuration to start an experiment for validation.
+
+```bash
+nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
+```
+
+For other examples, you need to change the trial command `python3` to `python` in each example YAML.
+
+## FAQ
+
+### simplejson failed when installing NNI
+
+Make sure the C++ 14.0 compiler is installed.
+>building 'simplejson._speedups' extension error: [WinError 3] The system cannot find the path specified
+
+### Trial failed with missing DLL in command line or PowerShell
+
+This error is caused by missing LIBIFCOREMD.DLL and LIBMMD.DLL, which makes SciPy fail to install. Using Anaconda or Miniconda with Python (64-bit) can solve it.
+>ImportError: DLL load failed
+
+### Trial failed on webUI
+
+Please check the trial log file stderr for more details.
+
+If there is a stderr file, please check it. Two possible causes are as follows:
+
+* forgetting to change the trial command `python3` to `python` in the experiment YAML.
+* forgetting to install experiment dependencies such as TensorFlow, Keras, and so on.
+
+### Fail to use BOHB on Windows
+Make sure the C++ 14.0 compiler is installed, then try running `nnictl package install --name=BOHB` to install the dependencies.
+
+### Tuners not supported on Windows
+SMAC is not supported currently; for the specific reason, refer to this [GitHub issue](https://github.com/automl/SMAC3/issues/483). 
+ +### Use a Windows server as a remote worker +Currently you can't. + +Note: + +* If there is any error like `Segmentation fault`, please refer to [FAQ](FAQ.md) + + +## Further reading + +* [Overview](../Overview.md) +* [Use command line tool nnictl](Nnictl.md) +* [Use NNIBoard](WebUI.md) +* [Define search space](SearchSpaceSpec.md) +* [Config an experiment](ExperimentConfig.md) +* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md) +* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md) +* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md) +* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md) +* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md) \ No newline at end of file diff --git a/docs/en_US/Tutorial/NniOnWindows.md b/docs/en_US/Tutorial/NniOnWindows.md deleted file mode 100644 index 6e2335dd8c..0000000000 --- a/docs/en_US/Tutorial/NniOnWindows.md +++ /dev/null @@ -1,49 +0,0 @@ -# NNI on Windows (experimental feature) - -Running NNI on Windows is an experimental feature. Windows 10.1809 is well tested and recommended. - -## **Installation on Windows** - - please refer to [Installation](Installation.md) for more details. - -When these things are done, use the **config_windows.yml** configuration to start an experiment for validation. - -```bash -nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml -``` - -For other examples you need to change trial command `python3` into `python` in each example YAML. - -## **FAQ** - -### simplejson failed when installing NNI - -Make sure C++ 14.0 compiler installed. 
->building 'simplejson._speedups' extension error: [WinError 3] The system cannot find the path specified - -### Trial failed with missing DLL in command line or PowerShell - -This error caused by missing LIBIFCOREMD.DLL and LIBMMD.DLL and fail to install SciPy. Using Anaconda or Miniconda with Python(64-bit) can solve it. ->ImportError: DLL load failed - -### Trial failed on webUI - -Please check the trial log file stderr for more details. - -If there is a stderr file, please check out. Two possible cases are as follows: - -* forget to change the trial command `python3` into `python` in each experiment YAML. -* forget to install experiment dependencies such as TensorFlow, Keras and so on. - -### Fail to use BOHB on Windows -Make sure C++ 14.0 compiler installed then try to run `nnictl package install --name=BOHB` to install the dependencies. - -### Not supported tuner on Windows -SMAC is not supported currently, the specific reason can be referred to this [GitHub issue](https://github.com/automl/SMAC3/issues/483). - -### Use a Windows server as a remote worker -Currently you can't. - -Note: - -* If there is any error like `Segmentation fault`, please refer to [FAQ](FAQ.md) diff --git a/docs/en_US/Tutorial/QuickStart.md b/docs/en_US/Tutorial/QuickStart.md index f48550cbfc..c460638358 100644 --- a/docs/en_US/Tutorial/QuickStart.md +++ b/docs/en_US/Tutorial/QuickStart.md @@ -19,7 +19,7 @@ Note: * For Linux and MacOS `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges. 
* If there is any error like `Segmentation fault`, please refer to [FAQ](FAQ.md) -* For the `system requirements` of NNI, please refer to [Install NNI](Installation.md) +* For the `system requirements` of NNI, please refer to [Install NNI on Linux&Mac](InstallationLinux.md) or [Windows](InstallationWin.md) ## "Hello World" example on MNIST diff --git a/docs/en_US/advanced.rst b/docs/en_US/advanced.rst deleted file mode 100644 index e38f634969..0000000000 --- a/docs/en_US/advanced.rst +++ /dev/null @@ -1,5 +0,0 @@ -Advanced Features -===================== - -.. toctree:: - MultiPhase<./AdvancedFeature/MultiPhase> diff --git a/docs/en_US/assessors.rst b/docs/en_US/assessors.rst deleted file mode 100644 index a554959876..0000000000 --- a/docs/en_US/assessors.rst +++ /dev/null @@ -1,19 +0,0 @@ -Assessors -============== -In order to save our computing resources, NNI supports an early stop policy and creates **Assessor** to finish this job. - -Assessor receives the intermediate result from Trial and decides whether the Trial should be killed by specific algorithm. Once the Trial experiment meets the early stop conditions(which means assessor is pessimistic about the final results), the assessor will kill the trial and the status of trial will be `"EARLY_STOPPED"`. - -Here is an experimental result of MNIST after using 'Curvefitting' Assessor in 'maximize' mode, you can see that assessor successfully **early stopped** many trials with bad hyperparameters in advance. If you use assessor, we may get better hyperparameters under the same computing resources. - -*Implemented code directory: config_assessor.yml * - -.. image:: ../img/Assessor.png - -Like Tuners, users can either use built-in Assessors, or customize an Assessor on their own. Please refer to the following tutorials for detail: - -.. 
toctree:: - :maxdepth: 2 - - Builtin Assessors - Customized Assessors diff --git a/docs/en_US/builtin_assessor.rst b/docs/en_US/builtin_assessor.rst index a109a3f533..c7307a9ef9 100644 --- a/docs/en_US/builtin_assessor.rst +++ b/docs/en_US/builtin_assessor.rst @@ -1,6 +1,16 @@ Builtin-Assessors ================= +In order to save our computing resources, NNI supports an early stop policy and creates **Assessor** to finish this job. + +Assessor receives the intermediate result from Trial and decides whether the Trial should be killed by specific algorithm. Once the Trial experiment meets the early stop conditions(which means assessor is pessimistic about the final results), the assessor will kill the trial and the status of trial will be `"EARLY_STOPPED"`. + +Here is an experimental result of MNIST after using 'Curvefitting' Assessor in 'maximize' mode, you can see that assessor successfully **early stopped** many trials with bad hyperparameters in advance. If you use assessor, we may get better hyperparameters under the same computing resources. + +*Implemented code directory: config_assessor.yml * + +.. image:: ../img/Assessor.png + .. toctree:: :maxdepth: 1 diff --git a/docs/en_US/builtin_tuner.rst b/docs/en_US/builtin_tuner.rst index de66531354..f8eb7546cd 100644 --- a/docs/en_US/builtin_tuner.rst +++ b/docs/en_US/builtin_tuner.rst @@ -1,5 +1,10 @@ Builtin-Tuners -================== +============== + +NNI provides an easy way to adopt an approach to set up parameter tuning algorithms, we call them **Tuner**. + +Tuner receives metrics from `Trial` to evaluate the performance of a specific parameters/architecture configures. And tuner sends next hyper-parameter or architecture configure to Trial. + .. 
toctree:: :maxdepth: 1 diff --git a/docs/en_US/feature_engineering.rst b/docs/en_US/feature_engineering.rst index bfbcb6f490..6c804ad50e 100644 --- a/docs/en_US/feature_engineering.rst +++ b/docs/en_US/feature_engineering.rst @@ -8,8 +8,6 @@ We'd like to invite you to use, feedback and even contribute. For details, please refer to the following tutorials: .. toctree:: - :maxdepth: 2 - Overview GradientFeatureSelector GBDTSelector diff --git a/docs/en_US/hpo_advanced.rst b/docs/en_US/hpo_advanced.rst new file mode 100644 index 0000000000..0befd608fc --- /dev/null +++ b/docs/en_US/hpo_advanced.rst @@ -0,0 +1,9 @@ +Advanced Features +================= + +.. toctree:: + Enable Multi-phase + Write a New Tuner + Write a New Assessor + Write a New Advisor + Write a New Training Service diff --git a/docs/en_US/hyperparameter_tune.rst b/docs/en_US/hyperparameter_tune.rst new file mode 100644 index 0000000000..f7e55f89ab --- /dev/null +++ b/docs/en_US/hyperparameter_tune.rst @@ -0,0 +1,15 @@ +###################### +Hyper-parameter Tuning +###################### + +.. 
toctree:: + :maxdepth: 2 + + Write Trial + Tuners + Assessors + Training Platform + Examples + WebUI + How to Debug + Advanced \ No newline at end of file diff --git a/docs/en_US/index.rst b/docs/en_US/index.rst index 54d84c6e38..2526188371 100644 --- a/docs/en_US/index.rst +++ b/docs/en_US/index.rst @@ -12,11 +12,14 @@ Contents :titlesonly: Overview - QuickStart - Tutorials - Examples - Reference - FAQ - Contribution - Changelog - Community Sharings + Installation + QuickStart + Hyper-parameter Tuning + Neural Architecture Search + Model Compression + Feature Engineering + References + Community Sharings + FAQ + How to Contribution + Changelog \ No newline at end of file diff --git a/docs/en_US/installation.rst b/docs/en_US/installation.rst new file mode 100644 index 0000000000..2606ceaa05 --- /dev/null +++ b/docs/en_US/installation.rst @@ -0,0 +1,12 @@ +############ +Installation +############ + +Currently we support installation on Linux, Mac and Windows. And also allow you to use docker. + +.. toctree:: + :maxdepth: 2 + + Linux & Mac + Windows + Use Docker \ No newline at end of file diff --git a/docs/en_US/model_compression.rst b/docs/en_US/model_compression.rst index 36cff91193..61caf4d8d8 100644 --- a/docs/en_US/model_compression.rst +++ b/docs/en_US/model_compression.rst @@ -13,8 +13,6 @@ On the other hand, users could easily customize their new compression algorithms For details, please refer to the following tutorials: .. toctree:: - :maxdepth: 2 - Overview Level Pruner AGP Pruner diff --git a/docs/en_US/nas.rst b/docs/en_US/nas.rst index a5bd8f6b8f..5a267cc2ac 100644 --- a/docs/en_US/nas.rst +++ b/docs/en_US/nas.rst @@ -16,8 +16,6 @@ to accelerate innovations on NAS, and apply state-of-art algorithms on real worl For details, please refer to the following tutorials: .. 
toctree:: - :maxdepth: 2 - Overview NAS Interface ENAS diff --git a/docs/en_US/reference.rst b/docs/en_US/reference.rst index ee300086f5..df2306eb70 100644 --- a/docs/en_US/reference.rst +++ b/docs/en_US/reference.rst @@ -2,12 +2,9 @@ References ================== .. toctree:: - :maxdepth: 3 - - Command Line - Python API - Annotation - Configuration + nnictl Commands + Experiment Configuration Search Space - TrainingService - Framework Library + NNI Annotation + SDK API References + Supported Framework Library diff --git a/docs/en_US/tuners.rst b/docs/en_US/tuners.rst deleted file mode 100644 index 98c019b8a0..0000000000 --- a/docs/en_US/tuners.rst +++ /dev/null @@ -1,18 +0,0 @@ -################# -Tuners -################# - -NNI provides an easy way to adopt an approach to set up parameter tuning algorithms, we call them **Tuner**. - -Tuner receives metrics from `Trial` to evaluate the performance of a specific parameters/architecture configures. And tuner sends next hyper-parameter or architecture configure to Trial. - -In NNI, we support two approaches to set the tuner: first is directly use builtin tuner provided by nni sdk, second is customize a tuner file by yourself. We also have Advisor that combines the functinality of Tuner & Assessor. - -For details, please refer to the following tutorials: - -.. toctree:: - :maxdepth: 2 - - Builtin Tuners - Customized Tuners - Customized Advisor diff --git a/docs/en_US/tutorials.rst b/docs/en_US/tutorials.rst deleted file mode 100644 index 7d3721dcaa..0000000000 --- a/docs/en_US/tutorials.rst +++ /dev/null @@ -1,20 +0,0 @@ -###################### -Tutorials -###################### - -.. 
toctree:: - :maxdepth: 2 - - Installation - Write Trial - Tuners - Assessors - NAS (Beta) - Model Compression (Beta) - Feature Engineering (Beta) - WebUI - Training Platform - How to use docker - advanced - Debug HowTo - NNI on Windows \ No newline at end of file From d2c610a1ddf464da9d83f1a6a7eec61d0dabba1e Mon Sep 17 00:00:00 2001 From: Yuge Zhang Date: Sat, 8 Feb 2020 11:41:45 +0800 Subject: [PATCH 2/9] Update guide and reference of NAS (#1972) --- docs/en_US/NAS/CDARTS.md | 4 - docs/en_US/NAS/DARTS.md | 6 +- docs/en_US/NAS/ENAS.md | 4 - docs/en_US/NAS/NasGuide.md | 311 ++++++++++++++++++ docs/en_US/NAS/NasInterface.md | 173 ---------- docs/en_US/NAS/NasReference.md | 109 ++++++ docs/en_US/NAS/Overview.md | 28 +- docs/en_US/NAS/SPOS.md | 6 - docs/en_US/nas.rst | 3 +- docs/img/nas_abstract_illustration.png | Bin 0 -> 36993 bytes src/sdk/pynni/nni/nas/pytorch/base_mutator.py | 28 +- src/sdk/pynni/nni/nas/pytorch/base_trainer.py | 17 + src/sdk/pynni/nni/nas/pytorch/callbacks.py | 65 ++++ .../pynni/nni/nas/pytorch/cdarts/mutator.py | 17 +- .../pynni/nni/nas/pytorch/cdarts/trainer.py | 122 +++---- .../nni/nas/pytorch/classic_nas/mutator.py | 43 ++- .../pynni/nni/nas/pytorch/darts/trainer.py | 72 ++-- src/sdk/pynni/nni/nas/pytorch/enas/mutator.py | 58 ++-- src/sdk/pynni/nni/nas/pytorch/enas/trainer.py | 104 +++--- src/sdk/pynni/nni/nas/pytorch/fixed.py | 31 +- src/sdk/pynni/nni/nas/pytorch/mutables.py | 148 +++++++-- src/sdk/pynni/nni/nas/pytorch/mutator.py | 25 +- .../pynni/nni/nas/pytorch/random/mutator.py | 11 + .../pynni/nni/nas/pytorch/spos/evolution.py | 40 +-- src/sdk/pynni/nni/nas/pytorch/spos/mutator.py | 38 ++- src/sdk/pynni/nni/nas/pytorch/spos/trainer.py | 61 ++-- src/sdk/pynni/nni/nas/pytorch/trainer.py | 119 +++++-- src/sdk/pynni/nni/nas/pytorch/utils.py | 62 +++- 28 files changed, 1119 insertions(+), 586 deletions(-) create mode 100644 docs/en_US/NAS/NasGuide.md delete mode 100644 docs/en_US/NAS/NasInterface.md create mode 100644 
docs/en_US/NAS/NasReference.md create mode 100644 docs/img/nas_abstract_illustration.png diff --git a/docs/en_US/NAS/CDARTS.md b/docs/en_US/NAS/CDARTS.md index 4242040f08..8a1f9d4d9f 100644 --- a/docs/en_US/NAS/CDARTS.md +++ b/docs/en_US/NAS/CDARTS.md @@ -46,16 +46,12 @@ bash run_retrain_cifar.sh .. autoclass:: nni.nas.pytorch.cdarts.CdartsTrainer :members: - .. automethod:: __init__ - .. autoclass:: nni.nas.pytorch.cdarts.RegularizedDartsMutator :members: .. autoclass:: nni.nas.pytorch.cdarts.DartsDiscreteMutator :members: - .. automethod:: __init__ - .. autoclass:: nni.nas.pytorch.cdarts.RegularizedMutatorParallel :members: ``` diff --git a/docs/en_US/NAS/DARTS.md b/docs/en_US/NAS/DARTS.md index d742a8ef6f..8c1384cd25 100644 --- a/docs/en_US/NAS/DARTS.md +++ b/docs/en_US/NAS/DARTS.md @@ -43,8 +43,10 @@ python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json .. autoclass:: nni.nas.pytorch.darts.DartsTrainer :members: - .. automethod:: __init__ - .. autoclass:: nni.nas.pytorch.darts.DartsMutator :members: ``` + +## Limitations + +* DARTS doesn't support DataParallel and needs to be customized in order to support DistributedDataParallel. diff --git a/docs/en_US/NAS/ENAS.md b/docs/en_US/NAS/ENAS.md index ad389f28b9..e6bae1ab9a 100644 --- a/docs/en_US/NAS/ENAS.md +++ b/docs/en_US/NAS/ENAS.md @@ -37,10 +37,6 @@ python3 search.py -h .. autoclass:: nni.nas.pytorch.enas.EnasTrainer :members: - .. automethod:: __init__ - .. autoclass:: nni.nas.pytorch.enas.EnasMutator :members: - - .. automethod:: __init__ ``` diff --git a/docs/en_US/NAS/NasGuide.md b/docs/en_US/NAS/NasGuide.md new file mode 100644 index 0000000000..b9c702d223 --- /dev/null +++ b/docs/en_US/NAS/NasGuide.md @@ -0,0 +1,311 @@ +# Guide: Using NAS on NNI + +```eval_rst +.. contents:: + +.. Note:: The APIs are in an experimental stage. The current programing interface is subject to change. 
+```
+
+![](../../img/nas_abstract_illustration.png)
+
+Modern Neural Architecture Search (NAS) methods usually incorporate [three dimensions][1]: search space, search strategy, and performance estimation strategy. The search space often contains a limited set of neural network architectures to explore, while the search strategy samples architectures from the search space, gets estimates of their performance, and evolves itself. Ideally, the search strategy should find the best architecture in the search space and report it to users. After users obtain such a "best architecture", many methods use a "retrain step", which trains the network with the same pipeline as any traditional model.
+
+## Implement a Search Space
+
+Assuming we've got a baseline model, what should we do to empower it with NAS? Take [MNIST on PyTorch](https://github.com/pytorch/examples/blob/master/mnist/main.py) as an example; the code might look like this:
+
+```python
+from nni.nas.pytorch import mutables
+
+class Net(nn.Module):
+    def __init__(self):
+        super(Net, self).__init__()
+        self.conv1 = mutables.LayerChoice([
+            nn.Conv2d(1, 32, 3, 1),
+            nn.Conv2d(1, 32, 5, 3)
+        ]) # try 3x3 kernel and 5x5 kernel
+        self.conv2 = nn.Conv2d(32, 64, 3, 1)
+        self.dropout1 = nn.Dropout2d(0.25)
+        self.dropout2 = nn.Dropout2d(0.5)
+        self.fc1 = nn.Linear(9216, 128)
+        self.fc2 = nn.Linear(128, 10)
+
+    def forward(self, x):
+        x = self.conv1(x)
+        x = F.relu(x)
+        # ... same as original ...
+        return output
+```
+
+The example above adds an option of choosing conv5x5 at conv1. The modification is as simple as declaring a `LayerChoice` with the original conv3x3 and a new conv5x5 as its parameter. That's it! You don't have to modify the forward function in any way. You can imagine conv1 as any other module without NAS.
+
+So how about the possibilities of connections? This can be done with `InputChoice`. To allow for a skip connection in the MNIST example, we add another layer called conv3.
In the following example, a possible connection from conv2 is added to the output of conv3.
+
+```python
+from nni.nas.pytorch import mutables
+
+class Net(nn.Module):
+    def __init__(self):
+        # ... same ...
+        self.conv2 = nn.Conv2d(32, 64, 3, 1)
+        self.conv3 = nn.Conv2d(64, 64, 1, 1)
+        # declaring that there is exactly one candidate to choose from
+        # search strategy will choose one or None
+        self.skipcon = mutables.InputChoice(n_candidates=1)
+        # ... same ...
+
+    def forward(self, x):
+        x = self.conv1(x)
+        x = F.relu(x)
+        x = self.conv2(x)
+        x0 = self.skipcon([x]) # choose one or none from [x]
+        x = self.conv3(x)
+        if x0 is not None: # skip connection is active
+            x += x0
+        x = F.max_pool2d(x, 2)
+        # ... same ...
+        return output
+```
+
+An input choice can be thought of as a callable module that receives a list of tensors and outputs the concatenation/sum/mean of some of them (sum by default), or `None` if none is selected. Like layer choices, input choices should be **initialized in `__init__` and called in `forward`**. We will see later that this lets search algorithms identify these choices and do the necessary preparation.
+
+`LayerChoice` and `InputChoice` are both **mutables**. Mutable means "changeable". As opposed to traditional deep learning layers/modules, which have a fixed operation type once defined, models with mutables are essentially a series of possible models.
+
+Users can specify a **key** for each mutable. By default, NNI will assign one for you that is globally unique, but in case users want to share choices (for example, two `LayerChoice`s have the same candidate operations, and you want them to make the same choice, i.e., if the first one chooses the i-th op, the second one also chooses the i-th op), they can give them the same key. The key marks the identity of this choice and will be used in the dumped checkpoint.
So if you want to increase the readability of your exported architecture, manually assigning keys to each mutable would be a good idea. For advanced usage of mutables, see [Mutables](./NasReference.md#mutables).
+
+## Use a Search Algorithm
+
+Search approaches differ in how the search space is explored and how trials are spawned; there are at least two ways users can do a search. One is to run NAS distributedly, which can be as naive as enumerating all the architectures and training each one from scratch, or can leverage more advanced techniques, such as [SMASH][8], [ENAS][2], [DARTS][1], [FBNet][3], [ProxylessNAS][4], [SPOS][5], [Single-Path NAS][6], [Understanding One-shot][7] and [GDAS][9]. Since training many different architectures is known to be expensive, another family of methods, called one-shot NAS, builds a supernet containing every candidate in the search space as a subnetwork, and in each step trains a subnetwork or a combination of several subnetworks.
+
+Currently, several one-shot NAS methods are supported on NNI. For example, `DartsTrainer`, which uses SGD to train architecture weights and model weights iteratively, and `ENASTrainer`, which [uses a controller to train the model][2]. New and more efficient NAS trainers keep emerging in the research community.
+
+### One-Shot NAS
+
+Each one-shot NAS algorithm implements a trainer; detailed usage can be found in the description of each algorithm. Here is a simple example demonstrating how to use `EnasTrainer`.
+
+```python
+# this is exactly the same as traditional model training
+model = Net()
+dataset_train = CIFAR10(root="./data", train=True, download=True, transform=train_transform)
+dataset_valid = CIFAR10(root="./data", train=False, download=True, transform=valid_transform)
+criterion = nn.CrossEntropyLoss()
+optimizer = torch.optim.SGD(model.parameters(), 0.05, momentum=0.9, weight_decay=1.0E-4)
+
+# use NAS here
+def top1_accuracy(output, target):
+    # this is the function that computes the reward, as required by the ENAS algorithm
+    batch_size = target.size(0)
+    _, predicted = torch.max(output.data, 1)
+    return (predicted == target).sum().item() / batch_size
+
+def metrics_fn(output, target):
+    # the metrics function receives output and target and computes a dict of metrics
+    return {"acc1": top1_accuracy(output, target)}
+
+from nni.nas.pytorch import enas
+trainer = enas.EnasTrainer(model,
+                           loss=criterion,
+                           metrics=metrics_fn,
+                           reward_function=top1_accuracy,
+                           optimizer=optimizer,
+                           batch_size=128,
+                           num_epochs=10,  # 10 epochs
+                           dataset_train=dataset_train,
+                           dataset_valid=dataset_valid,
+                           log_frequency=10)  # print log every 10 steps
+trainer.train()  # training
+trainer.export(file="model_dir/final_architecture.json")  # export the final architecture to file
+```
+
+Users can directly run their training file with `python3 train.py`, without `nnictl`. After training, users can export the best of the found models through `trainer.export()`.
+
+Normally, a trainer exposes a few arguments that you can customize, for example, the loss function, the metrics function, the optimizer, and the datasets. These should satisfy most usage needs, and we do our best to make sure our built-in trainers work on as many models, tasks, and datasets as possible. But there is no guarantee.
For example, some trainers assume the task is a classification task; some trainers might have a different definition of "epoch" (e.g., an ENAS epoch = some child steps + some controller steps); most trainers do not support distributed training: they won't wrap your model with `DataParallel` or `DistributedDataParallel`. So after a few tryouts, if you want to actually use the trainers on your very customized applications, you might very soon need to [customize your trainer](#extend-the-ability-of-one-shot-trainers).
+
+### Distributed NAS
+
+Neural architecture search was originally executed by running each child model independently as a trial job. We also support this search approach, and it naturally fits into the NNI hyper-parameter tuning framework, where the tuner generates the child model for the next trial and trials run in the training service.
+
+To use this mode, there is no need to change the search space expressed with the NNI NAS API (i.e., `LayerChoice`, `InputChoice`, `MutableScope`). After the model is initialized, apply the function `get_and_apply_next_architecture` on the model. One-shot NAS trainers are not used in this mode. Here is a simple example:
+
+```python
+model = Net()
+
+# get the chosen architecture from tuner and apply it on model
+get_and_apply_next_architecture(model)
+train(model)  # your code for training the model
+acc = test(model)  # test the trained model
+nni.report_final_result(acc)  # report the performance of the chosen architecture
+```
+
+The search space should be generated and sent to the tuner. As the search space expressed with the NNI NAS API is embedded in user code, users can use "[nnictl ss_gen](../Tutorial/Nnictl.md)" to generate the search space file. Then, put the path of the generated search space in the field `searchSpacePath` of `config.yml`. The other fields in `config.yml` can be filled in by referring to [this tutorial](../Tutorial/QuickStart.md).
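To make the generated file concrete, here is a sketch of the kind of search space file `nnictl ss_gen` produces (the keys and operation names below are invented for illustration), parsed with only the standard library:

```python
import json

# Invented example of a generated search space; the real keys depend on
# your model's LayerChoice/InputChoice definitions.
search_space_text = """
{
  "conv1_key": {
    "_type": "layer_choice",
    "_value": ["conv3x3", "conv5x5", "maxpool3x3"]
  },
  "skip1_key": {
    "_type": "input_choice",
    "_value": {"candidates": ["conv1_key", "stem"], "n_chosen": 1}
  }
}
"""

search_space = json.loads(search_space_text)
for key, spec in search_space.items():
    if spec["_type"] == "layer_choice":
        print(key, "- choose 1 op from", len(spec["_value"]), "candidates")
    else:
        print(key, "- choose", spec["_value"]["n_chosen"], "of", spec["_value"]["candidates"])
```

Each top-level key corresponds to one mutable in your model, which is why readable mutable keys make the generated file easier to inspect.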
+
+You could use [NNI tuners](../Tuner/BuiltinTuner.md) to do the search. Currently, only the PPO Tuner supports the NAS search space.
+
+We support a standalone mode for easy debugging, where you can directly run the trial command without launching an NNI experiment. This is for checking whether your trial code runs correctly. The first candidate(s) are chosen for `LayerChoice` and `InputChoice` in this standalone mode.
+
+A complete example can be found [here](https://github.com/microsoft/nni/tree/master/examples/nas/classic_nas/config_nas.yml).
+
+### Retrain with Exported Architecture
+
+After the search phase, it's time to train the architecture that was found. Unlike many open-source NAS algorithms, which write a whole new model specifically for retraining, we found that the search model and the retraining model are usually very similar, so you can construct your final model with the exact same model code. For example:
+
+```python
+model = Net()
+apply_fixed_architecture(model, "model_dir/final_architecture.json")
+```
+
+The JSON is simply a mapping from mutable keys to one-hot or multi-hot representations of choices. For example:
+
+```json
+{
+    "LayerChoice1": [false, true, false, false],
+    "InputChoice2": [true, true, false]
+}
+```
+
+After applying, the model is fixed and ready for final training. Although it works as a single model, it might contain more parameters than expected. This comes with pros and cons. On the plus side, you can directly load a checkpoint dumped from the supernet during the search phase and start retraining from there. However, the model also has redundant parameters, which may cause problems when trying to count the number of parameters in the model. For the deeper reasons and possible workarounds, see [Trainers](./NasReference.md#retrain).
+
+Also refer to [DARTS](./DARTS.md) for example code of retraining.
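As a small illustration of the exported format, here is a sketch (the key names are invented) that decodes such a checkpoint into the chosen candidate indices using only the standard library:

```python
import json

# Invented example of an exported architecture; real keys come from your mutables.
exported_text = """
{
  "LayerChoice1": [false, true, false, false],
  "InputChoice2": [true, true, false]
}
"""

exported = json.loads(exported_text)

def chosen_indices(mask):
    """Indices of the selected candidates in a one-hot/multi-hot mask."""
    return [i for i, selected in enumerate(mask) if selected]

summary = {key: chosen_indices(mask) for key, mask in exported.items()}
print(summary)  # layer choices have one index; input choices may have several
```

A one-hot mask (layer choice) decodes to a single index, while a multi-hot mask (input choice) can decode to several, matching the "concatenation/sum/mean of some inputs" semantics described earlier.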
+
+## Customize a Search Algorithm
+
+### Extend the Ability of One-Shot Trainers
+
+Users might want to do multiple things when using the trainers on real tasks, for example, distributed training, half-precision training, periodic logging, writing to TensorBoard, dumping checkpoints, and so on. As mentioned previously, some trainers support some of the items listed above; others might not. Generally, there are two recommended ways to add anything you want to an existing trainer: inherit an existing trainer and override its methods, or copy an existing trainer and modify it.
+
+Either way, you are stepping into the scope of implementing a new trainer. Basically, implementing a one-shot trainer is no different from implementing any traditional deep learning trainer, except that a new concept called a mutator reveals itself, so the implementation differs in at least two places:
+
+* Initialization
+
+```python
+model = Model()
+mutator = MyMutator(model)
+```
+
+* Training
+
+```python
+for _ in range(epochs):
+    for x, y in data_loader:
+        mutator.reset()  # reset all the choices in model
+        out = model(x)  # like traditional model
+        loss = criterion(out, y)
+        loss.backward()
+        # no difference below
+```
+
+To demonstrate what mutators are for, we need to know how one-shot NAS normally works. Usually, one-shot NAS "co-optimizes model weights and architecture weights". It repeatedly samples an architecture or a combination of several architectures from the supernet, trains the chosen architectures like traditional deep learning models, updates the trained parameters to the supernet, and uses the metrics or loss as a signal to guide the architecture sampler. The mutator is the architecture sampler here, often defined to be another deep learning model. Therefore, you can treat it as any model: define parameters in it and optimize it with optimizers. One mutator is initialized with exactly one model.
Once a mutator is bound to a model, it cannot be rebound to another model.
+
+`mutator.reset()` is the core step. That's where all the choices in the model are finalized. The reset result stays effective until the next reset flushes it. After the reset, the model can be treated as a traditional model for forward and backward passes.
+
+Finally, mutators provide a method called `mutator.export()` that exports a dict describing the architecture chosen for the model. Note that currently this dict is a mapping from keys of mutables to tensors of selections. So in order to dump it to JSON, users need to convert the tensors explicitly into Python lists.
+
+Meanwhile, NNI provides some useful tools so that users can implement trainers more easily. See [Trainers](./NasReference.md#trainers) for details.
+
+### Implement New Mutators
+
+To start with, here is the pseudo-code that demonstrates what happens on `mutator.reset()` and `mutator.export()`.
+
+```python
+def reset(self):
+    self.apply_on_model(self.sample_search())
+```
+
+```python
+def export(self):
+    return self.sample_final()
+```
+
+On reset, a new architecture is sampled with `sample_search()` and applied to the model. Then the model is trained for one or more steps in the search phase. On export, a new architecture is sampled with `sample_final()` and **nothing is done to the model**. This is either for checkpointing or for exporting the final architecture.
+
+The requirements on the return values of `sample_search()` and `sample_final()` are the same: a mapping from mutable keys to tensors. The tensor can be either a BoolTensor (true for selected, false for not selected) or a FloatTensor that applies a weight to each candidate. The selected branches are then computed (in `LayerChoice`, modules are called; in `InputChoice`, it's just the tensors themselves) and reduced with the reduction operation specified in the choices. Since most algorithms only need to worry about the former case, here is an example of a mutator implementation.
+
+```python
+class RandomMutator(Mutator):
+    def __init__(self, model):
+        super().__init__(model)  # don't forget to call super
+        # do something else
+
+    def sample_search(self):
+        result = dict()
+        for mutable in self.mutables:  # iterates over all the mutable modules in the user model
+            # mutables sharing the same key will be de-duplicated
+            if isinstance(mutable, LayerChoice):
+                # decide that this mutable should choose `gen_index`
+                gen_index = np.random.randint(mutable.length)
+                result[mutable.key] = torch.tensor([i == gen_index for i in range(mutable.length)],
+                                                   dtype=torch.bool)
+            elif isinstance(mutable, InputChoice):
+                if mutable.n_chosen is None:  # n_chosen is None, so choose any number
+                    result[mutable.key] = torch.randint(high=2, size=(mutable.n_candidates,)).view(-1).bool()
+                # else do something else
+        return result
+
+    def sample_final(self):
+        return self.sample_search()  # use the same logic here; you can do something different
+```
+
+The complete example of the random mutator can be found [here](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/nas/pytorch/random/mutator.py).
+
+For advanced usage, e.g., when users want to manipulate the way modules in `LayerChoice` are executed, they can inherit `BaseMutator` and overwrite `on_forward_layer_choice` and `on_forward_input_choice`, which are the callback implementations of `LayerChoice` and `InputChoice` respectively. Users can still use the property `mutables` to get all `LayerChoice` and `InputChoice` in the model code. For details, please refer to the [reference](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch).
+
+```eval_rst
+.. tip::
+    A useful application of the random mutator is debugging. Running
+
+    .. code-block:: python
+
+      mutator = RandomMutator(model)
+      mutator.reset()
+
+    will immediately set one possible candidate in the search space as the active one.
+
+```
+
+### Implement a Distributed NAS Tuner
+
+Before learning how to write a distributed NAS tuner, users should first learn how to write a general tuner. Read [Customize Tuner](../Tuner/CustomizeTuner.md) for a tutorial.
+
+When users call "[nnictl ss_gen](../Tutorial/Nnictl.md)" to generate the search space file, a search space file like this will be generated:
+
+```json
+{
+    "key_name": {
+        "_type": "layer_choice",
+        "_value": ["op1_repr", "op2_repr", "op3_repr"]
+    },
+    "key_name": {
+        "_type": "input_choice",
+        "_value": {
+            "candidates": ["in1_key", "in2_key", "in3_key"],
+            "n_chosen": 1
+        }
+    }
+}
+```
+
+This is the exact search space tuners will receive in `update_search_space`. It is then the tuner's responsibility to interpret the search space and generate new candidates in `generate_parameters`. Valid "parameters" are in the following format:
+
+```json
+{
+    "key_name": {
+        "_value": "op1_repr",
+        "_idx": 0
+    },
+    "key_name": {
+        "_value": ["in2_key"],
+        "_idx": [1]
+    }
+}
+```
+
+Send it through `generate_parameters`, and the tuner will work like any HPO tuner. Refer to the [SPOS](./SPOS.md) example code for an example.
+
+[1]: https://arxiv.org/abs/1808.05377
+[2]: https://arxiv.org/abs/1802.03268
+[3]: https://arxiv.org/abs/1812.03443
+[4]: https://arxiv.org/abs/1812.00332
+[5]: https://arxiv.org/abs/1904.00420
+[6]: https://arxiv.org/abs/1904.02877
+[7]: http://proceedings.mlr.press/v80/bender18a
+[8]: https://arxiv.org/abs/1708.05344
+[9]: https://arxiv.org/abs/1910.04465
\ No newline at end of file
diff --git a/docs/en_US/NAS/NasInterface.md b/docs/en_US/NAS/NasInterface.md
deleted file mode 100644
index 76dc69087c..0000000000
--- a/docs/en_US/NAS/NasInterface.md
+++ /dev/null
@@ -1,173 +0,0 @@
-# NNI NAS Programming Interface
-
-We are trying to support various NAS algorithms with unified programming interface, and it's still in experimental stage. It means the current programing interface might be updated in future.
- -## Programming interface for user model - -The programming interface of designing and searching a model is often demanded in two scenarios. - -1. When designing a neural network, there may be multiple operation choices on a layer, sub-model, or connection, and it's undetermined which one or combination performs best. So, it needs an easy way to express the candidate layers or sub-models. -2. When applying NAS on a neural network, it needs an unified way to express the search space of architectures, so that it doesn't need to update trial code for different searching algorithms. - - -For expressing neural architecture search space in user code, we provide the following APIs (take PyTorch as example): - -```python -# in PyTorch module class -def __init__(self): - ... - # choose one ``op`` from ``ops``, for PyTorch this is a module. - # op_candidates: for PyTorch ``ops`` is a list of modules, for tensorflow it is a list of keras layers. - # key: the name of this ``LayerChoice`` instance - self.one_layer = nni.nas.pytorch.LayerChoice([ - PoolBN('max', channels, 3, stride, 1, affine=False), - PoolBN('avg', channels, 3, stride, 1, affine=False), - FactorizedReduce(channels, channels, affine=False), - SepConv(channels, channels, 3, stride, 1, affine=False), - DilConv(channels, channels, 3, stride, 2, 2, affine=False)], - key="layer_name") - ... - -def forward(self, x): - ... - out = self.one_layer(x) - ... -``` -This is for users to specify multiple candidate operations for a layer, one operation will be chosen at last. `key` is the identifier of the layer,it could be used to share choice between multiple `LayerChoice`. For example, there are two `LayerChoice` with the same candidate operations, and you want them to have the same choice (i.e., if first one chooses the `i`th op, the second one also chooses the `i`th op), give them the same key. - -```python -def __init__(self): - ... - # choose ``n_selected`` from ``n_candidates`` inputs. 
- # n_candidates: the number of candidate inputs - # n_chosen: the number of chosen inputs - # key: the name of this ``InputChoice`` instance - self.input_switch = nni.nas.pytorch.InputChoice( - n_candidates=3, - n_chosen=1, - key="switch_name") - ... - -def forward(self, x): - ... - out = self.input_switch([in_tensor1, in_tensor2, in_tensor3]) - ... -``` -`InputChoice` is a PyTorch module, in init, it needs meta information, for example, from how many input candidates to choose how many inputs, and the name of this initialized `InputChoice`. The real candidate input tensors can only be obtained in `forward` function. In the `forward` function, the `InputChoice` module you create in `__init__` (e.g., `self.input_switch`) is called with real candidate input tensors. - -Some [NAS trainers](#one-shot-training-mode) need to know the source layer the input tensors, thus, we add one input argument `choose_from` in `InputChoice` to indicate the source layer of each candidate input. `choose_from` is a list of string, each element is `key` of `LayerChoice` and `InputChoice` or the name of a module (refer to [the code](https://github.com/microsoft/nni/blob/master/src/sdk/pynni/nni/nas/pytorch/mutables.py) for more details). - - -Besides `LayerChoice` and `InputChoice`, we also provide `MutableScope` which allows users to label a sub-network, thus, could provide more semantic information (e.g., the structure of the network) to NAS trainers. Here is an example: -```python -class Cell(mutables.MutableScope): - def __init__(self, scope_name): - super().__init__(scope_name) - self.layer1 = nni.nas.pytorch.LayerChoice(...) - self.layer2 = nni.nas.pytorch.LayerChoice(...) - self.layer3 = nni.nas.pytorch.LayerChoice(...) - ... -``` -The three `LayerChoice` (`layer1`, `layer2`, `layer3`) are included in the `MutableScope` named `scope_name`. NAS trainer could get this hierarchical structure. 
- - -## Two training modes - -After writing your model with search space embedded in the model using the above APIs, the next step is finding the best model from the search space. There are two training modes: [one-shot training mode](#one-shot-training-mode) and [classic distributed search](#classic-distributed-search). - -### One-shot training mode - -Similar to optimizers of deep learning models, the procedure of finding the best model from search space can be viewed as a type of optimizing process, we call it `NAS trainer`. There have been several NAS trainers, for example, `DartsTrainer` which uses SGD to train architecture weights and model weights iteratively, `ENASTrainer` which uses a controller to train the model. New and more efficient NAS trainers keep emerging in research community. - -NNI provides some popular NAS trainers, to use a NAS trainer, users could initialize a trainer after the model is defined: - -```python -# create a DartsTrainer -trainer = DartsTrainer(model, - loss=criterion, - metrics=lambda output, target: accuracy(output, target, topk=(1,)), - optimizer=optim, - num_epochs=args.epochs, - dataset_train=dataset_train, - dataset_valid=dataset_valid,) -# finding the best model from search space -trainer.train() -# export the best found model -trainer.export(file='./chosen_arch') -``` - -Different trainers could have different input arguments depending on their algorithms. Please refer to [each trainer's code](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch) for detailed arguments. After training, users could export the best one of the found models through `trainer.export()`. No need to start an NNI experiment through `nnictl`. - -The supported trainers can be found [here](Overview.md#supported-one-shot-nas-algorithms). A very simple example using NNI NAS API can be found [here](https://github.com/microsoft/nni/tree/master/examples/nas/simple/train.py). 
- -### Classic distributed search - -Neural architecture search is originally executed by running each child model independently as a trial job. We also support this searching approach, and it naturally fits in NNI hyper-parameter tuning framework, where tuner generates child model for next trial and trials run in training service. - -For using this mode, no need to change the search space expressed with NNI NAS API (i.e., `LayerChoice`, `InputChoice`, `MutableScope`). After the model is initialized, apply the function `get_and_apply_next_architecture` on the model. One-shot NAS trainers are not used in this mode. Here is a simple example: -```python -class Net(nn.Module): - # defined model with LayerChoice and InputChoice - ... - -model = Net() -# get the chosen architecture from tuner and apply it on model -get_and_apply_next_architecture(model) -# your code for training the model -train(model) -# test the trained model -acc = test(model) -# report the performance of the chosen architecture -nni.report_final_result(acc) -``` - -The search space should be automatically generated and sent to tuner. As with NNI NAS API the search space is embedded in user code, users could use "[nnictl ss_gen](../Tutorial/Nnictl.md)" to generate search space file. Then, put the path of the generated search space in the field `searchSpacePath` of `config.yml`. The other fields in `config.yml` can be filled by referring [this tutorial](../Tutorial/QuickStart.md). - -You could use [NNI tuners](../Tuner/BuiltinTuner.md) to do the search. - -We support standalone mode for easy debugging, where you could directly run the trial command without launching an NNI experiment. This is for checking whether your trial code can correctly run. The first candidate(s) are chosen for `LayerChoice` and `InputChoice` in this standalone mode. - -The complete example code can be found [here](https://github.com/microsoft/nni/tree/master/examples/nas/classic_nas/config_nas.yml). 
- -## Programming interface for NAS algorithm - -We also provide simple interface for users to easily implement a new NAS trainer on NNI. - -### Implement a new NAS trainer on NNI - -To implement a new NAS trainer, users basically only need to implement two classes by inheriting `BaseMutator` and `BaseTrainer` respectively. - -In `BaseMutator`, users need to overwrite `on_forward_layer_choice` and `on_forward_input_choice`, which are the implementation of `LayerChoice` and `InputChoice` respectively. Users could use property `mutables` to get all `LayerChoice` and `InputChoice` in the model code. Then users need to implement a new trainer, which instantiates the new mutator and implement the training logic. For details, please read [the code](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch) and the supported trainers, for example, [DartsTrainer](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch/darts). - -### Implement an NNI tuner for NAS - -NNI tuner for NAS takes the auto generated search space. The search space format of `LayerChoice` and `InputChoice` is shown below: -```json -{ - "key_name": { - "_type": "layer_choice", - "_value": ["op1_repr", "op2_repr", "op3_repr"] - }, - "key_name": { - "_type": "input_choice", - "_value": { - "candidates": ["in1_key", "in2_key", "in3_key"], - "n_chosen": 1 - } - } -} -``` - -Correspondingly, the generate architecture is in the following format: -```json -{ - "key_name": { - "_value": "op1_repr", - "_idx": 0 - }, - "key_name": { - "_value": ["in2_key"], - "_idex": [1] - } -} -``` diff --git a/docs/en_US/NAS/NasReference.md b/docs/en_US/NAS/NasReference.md new file mode 100644 index 0000000000..73a914c763 --- /dev/null +++ b/docs/en_US/NAS/NasReference.md @@ -0,0 +1,109 @@ +# NAS Reference + +```eval_rst +.. contents:: +``` + +## Mutables + +```eval_rst +.. autoclass:: nni.nas.pytorch.mutables.Mutable + :members: + +.. 
autoclass:: nni.nas.pytorch.mutables.LayerChoice + :members: + +.. autoclass:: nni.nas.pytorch.mutables.InputChoice + :members: + +.. autoclass:: nni.nas.pytorch.mutables.MutableScope + :members: +``` + +### Utilities + +```eval_rst +.. autofunction:: nni.nas.pytorch.utils.global_mutable_counting +``` + +## Mutators + +```eval_rst +.. autoclass:: nni.nas.pytorch.base_mutator.BaseMutator + :members: + +.. autoclass:: nni.nas.pytorch.mutator.Mutator + :members: +``` + +### Random Mutator + +```eval_rst +.. autoclass:: nni.nas.pytorch.random.RandomMutator + :members: +``` + +### Utilities + +```eval_rst +.. autoclass:: nni.nas.pytorch.utils.StructuredMutableTreeNode + :members: +``` + +## Trainers + +### Trainer + +```eval_rst +.. autoclass:: nni.nas.pytorch.base_trainer.BaseTrainer + :members: + +.. autoclass:: nni.nas.pytorch.trainer.Trainer + :members: +``` + +### Retrain + +```eval_rst +.. autofunction:: nni.nas.pytorch.fixed.apply_fixed_architecture + +.. autoclass:: nni.nas.pytorch.fixed.FixedArchitecture + :members: +``` + +### Distributed NAS + +```eval_rst +.. autofunction:: nni.nas.pytorch.classic_nas.get_and_apply_next_architecture + +.. autoclass:: nni.nas.pytorch.classic_nas.mutator.ClassicMutator + :members: +``` + +### Callbacks + +```eval_rst +.. autoclass:: nni.nas.pytorch.callbacks.Callback + :members: + +.. autoclass:: nni.nas.pytorch.callbacks.LRSchedulerCallback + :members: + +.. autoclass:: nni.nas.pytorch.callbacks.ArchitectureCheckpoint + :members: + +.. autoclass:: nni.nas.pytorch.callbacks.ModelCheckpoint + :members: +``` + +### Utilities + +```eval_rst +.. autoclass:: nni.nas.pytorch.utils.AverageMeterGroup + :members: + +.. autoclass:: nni.nas.pytorch.utils.AverageMeter + :members: + +.. 
autofunction:: nni.nas.pytorch.utils.to_device +``` diff --git a/docs/en_US/NAS/Overview.md b/docs/en_US/NAS/Overview.md index eea44781cc..5e63acc76b 100644 --- a/docs/en_US/NAS/Overview.md +++ b/docs/en_US/NAS/Overview.md @@ -6,11 +6,7 @@ However, it takes great efforts to implement NAS algorithms, and it is hard to r With this motivation, our ambition is to provide a unified architecture in NNI, to accelerate innovations on NAS, and apply state-of-art algorithms on real world problems faster. -With [the unified interface](./NasInterface.md), there are two different modes for the architecture search. [One](#supported-one-shot-nas-algorithms) is the so-called one-shot NAS, where a super-net is built based on search space, and using one shot training to generate good-performing child model. [The other](./NasInterface.md#classic-distributed-search) is the traditional searching approach, where each child model in search space runs as an independent trial, the performance result is sent to tuner and the tuner generates new child model. - -* [Supported One-shot NAS Algorithms](#supported-one-shot-nas-algorithms) -* [Classic Distributed NAS with NNI experiment](./NasInterface.md#classic-distributed-search) -* [NNI NAS Programming Interface](./NasInterface.md) +With the unified interface, there are two different modes for the architecture search. [One](#supported-one-shot-nas-algorithms) is the so-called one-shot NAS, where a super-net is built based on search space, and using one shot training to generate good-performing child model. [The other](#supported-distributed-nas-algorithms) is the traditional searching approach, where each child model in search space runs as an independent trial, the performance result is sent to tuner and the tuner generates new child model. ## Supported One-shot NAS Algorithms @@ -33,18 +29,26 @@ Here are some common dependencies to run the examples. 
PyTorch needs to be above
* PyTorch 1.2+
* git
 
-## Use NNI API
+## Supported Distributed NAS Algorithms
+
+|Name|Brief Introduction of Algorithm|
+|---|---|
+| [SPOS](SPOS.md) | [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) constructs a simplified supernet trained with a uniform path sampling method, and applies an evolutionary algorithm to efficiently search for the best-performing architectures. |
 
-NOTE, we are trying to support various NAS algorithms with unified programming interface, and it's in very experimental stage. It means the current programing interface may be updated in future.
+```eval_rst
+.. Note:: SPOS is a two-stage algorithm, whose first stage is one-shot and whose second stage is distributed, leveraging the result of the first stage as a checkpoint.
+```
 
-### Programming interface
+## Use NNI API
 
 The programming interface of designing and searching a model is often demanded in two scenarios.
 
 1. When designing a neural network, there may be multiple operation choices on a layer, sub-model, or connection, and it's undetermined which one or combination performs best. So, it needs an easy way to express the candidate layers or sub-models.
 2. When applying NAS on a neural network, it needs an unified way to express the search space of architectures, so that it doesn't need to update trial code for different searching algorithms.
 
-NNI proposed API is [here](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch). And [here](https://github.com/microsoft/nni/tree/master/examples/nas/naive) is an example of NAS implementation, which bases on NNI proposed interface.
+ +## Reference and Feedback [1]: https://arxiv.org/abs/1802.03268 [2]: https://arxiv.org/abs/1707.07012 @@ -52,9 +56,5 @@ NNI proposed API is [here](https://github.com/microsoft/nni/tree/master/src/sdk/ [4]: https://arxiv.org/abs/1806.10282 [5]: https://arxiv.org/abs/1703.01041 -## **Reference and Feedback** * To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub; -* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub; -* To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md); -* To know more about [Model Compression with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Overview.md); -* To know more about [Hyperparameter Tuning with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/Tuner/BuiltinTuner.md); +* To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub. \ No newline at end of file diff --git a/docs/en_US/NAS/SPOS.md b/docs/en_US/NAS/SPOS.md index 189310c1a1..0d5df10e19 100644 --- a/docs/en_US/NAS/SPOS.md +++ b/docs/en_US/NAS/SPOS.md @@ -93,17 +93,11 @@ By default, it will use `architecture_final.json`. This architecture is provided .. autoclass:: nni.nas.pytorch.spos.SPOSEvolution :members: - .. automethod:: __init__ - .. autoclass:: nni.nas.pytorch.spos.SPOSSupernetTrainer :members: - .. automethod:: __init__ - .. autoclass:: nni.nas.pytorch.spos.SPOSSupernetTrainingMutator :members: - - .. automethod:: __init__ ``` ## Known Limitations diff --git a/docs/en_US/nas.rst b/docs/en_US/nas.rst index 5a267cc2ac..6f2fb05bbd 100644 --- a/docs/en_US/nas.rst +++ b/docs/en_US/nas.rst @@ -17,7 +17,8 @@ For details, please refer to the following tutorials: .. 
toctree:: Overview - NAS Interface + Guide + API Reference ENAS DARTS P-DARTS diff --git a/docs/img/nas_abstract_illustration.png b/docs/img/nas_abstract_illustration.png new file mode 100644 index 0000000000000000000000000000000000000000..0faf667f5b0e79e5485753110b5839040ab1211d GIT binary patch literal 36993 zcmce;bySpH{4a{q4N@ZA-O@QocPJ84GJqi64GthF2ueyxqcjLehe!$tNDU2wbV=9U z&-=c=v(CEru66!9vzBWKjx+P@XYcR#Q{Rcw(Nf08d4Pk0f`YHAqM(O@f~Jdtat8$q z1N_GSXaoEL<&L|avOG%p5cLN50nHYs2}41tipRaOL#`NY((Ci?MM9@TJG-Z?+3`IOg>2V`GQ@>`WbCa6v}S8# z^L>ws{P&^qTJ2(;@xMpn9duoV{~n1s^wIzGoO`4S#Q!~15eWAGdAh0&k>Y>PX2!=zXD6cl-+FPK&X@zf80h1ymV9+zZ;Y3{nJP1?`L6Zk(_>fB zIgfY2pCcnxI&wAs`Ivp83&FQ7AB1e_#k>YiIe;a74srRK_Q=B@88wNX_@n0m@7o*dDbvu8hHHcX7mNRXCoo@ zGQ1DEs)B!S8!0v!mw^ZVgNOSSipWaKN3Za6iR6r$yvl+#6%=q7|GgFYQ$A-)5p#8p z$g7v=1HB1M*XrPt>2Ume&$>HD3)JUEK0dmBiH_UtK^OQ}_clE;lJxQB5EEDoJ&jVs zs!n`5nc3@u9=eG%o9Sxn-AJZ57is^K+9RLO24!yE3xV~QJM|0OwL#Z@qAw@7KiqTg zq491=;(FTXxiw*gq4?&3@Ku{y&5NIm-jk-@M*E1)s@tohc(RZJo4(e|-K_<#?q=U3 z2O7k#l+^A!Z5ca#=b+nb@26kgtS3J{!t|c8D>@HOSi3%5@UCdR_*Ax%F8p$~1Cwr> zHeeriYhdfC$EumW>eP1aRc}2&ZsPUp$)ck3o2BoM)JId_@DTS$ru6++}-K{!b3%ueTNpRN|VbjUXOZ|Fu zw&-7V&I|KxzrMPzeE+&JR#enlK_ltq(c*vVC1f+`lBe{>?SqJ;TMu3EmDkoQ^PuzY zIrm{{gWy7~Oo@ryix5--m&s@Dn)a6F#0j|?|8B`{lvijNW72+qU95WIzI0qi&;|YR znzp9!{`*teiwN9VfgWnBjwqpFsj=Ib zhiGaq#5G1%z*@UjG#^I$Ums7-byJu-)LV~#PhG?hb!*(_FAhHcoidl@zq?`l#oikA z%!iEgaittO4COA)Lcr0RV09dXESf9U2~jT!YS0WdW|3MF=ky`}#LC-81g^tUN3ms+ z>nlC+^>U;YE2HXs$9rk>6y^a5PJvkaE{w&8V8+e7tAu?3!75B8pMiZB)1uQDLM^r_PJn?+s$#joqro=A<2U?e5Tm3>1vQBhQ zD4Pr+Y1!;mM-?yYIP*}{@9&FKutTP7!zTCV{rApKNr^c+d;ckL3Ats~1IhDmMCbpM znYYcGCl*?Ey*6{$-~Ev$)MH-VBH_8|Q=p!5@*0nFna8lQJ^2l#(Cb*63dI<5FKK>{ zKRPC~RBtcUM@vDm*_vyr_x|%S!Vg*Jc$DHdKl{~v$2m^=#IQm=g?Ap*i7OL5huMao zG59x*G$M{Ot9+V!;KJ*AUi|u$)CTu2P)n-5-1nhe5`Q3Uzxw2}!4G7$bxhmCAK;iW znXgQm2k$&7Gye5*G(YMT{BW*5CUv=8WA-CmxCLyfOLe_cvz8g!is#+YH;J)Ot=JtF zQC-b&ABuX`)862^vVAhZ*ES37fwu7@0;zquOQ}j}R3Y1;zy0qyR!qU(xk>!{K-6jb zNr_&!N!9c4Hc*3J=&?dgL4}wCxn9Hl%EYAShL@)^p2Xvcg=} 
(binary image data omitted)
z`oR^njB{q!VX}ujWf_W?ZXN3^G{Nkg=P6NXWWt{T7+wYAS)Q3@&l7iJ>V*3xj&7WA z{w(GT+pc<4MC_!AoJN(FOKz@Wpy`K^eIadgJSv}n*%QcsI&O+gJiasx1wZ2XpX=El zsT`?Qo_p2LTTM1s&u5e;yb*|md;#U7&kF?G!aE9INdjI}0-Whib*)txAAz~(e0Cxw zs?r~iA@xAZLBFG>CsY<*-U!`4mzQ87e49Bhy2_$^R*j@oLuPWUZ~DgImI zwgku1^HBOe=gZ2u&tQ;T+LAMGlf|+`LQMLiagZrduUUvIOqx_{Wl1QcV^d>d#I|Ca z+J%{IGC1*#(wk1$4?bSS62o1^c@2C2?S`Q2|Ib=)Z(@o6XV!<>f&W7ri>8eHxPKZo uR3-EOA9_$Yt^aQk!2jR= num_select self.max_epochs = max_epochs self.num_select = num_select diff --git a/src/sdk/pynni/nni/nas/pytorch/spos/mutator.py b/src/sdk/pynni/nni/nas/pytorch/spos/mutator.py index 838f2fcd05..bcf3b59979 100644 --- a/src/sdk/pynni/nni/nas/pytorch/spos/mutator.py +++ b/src/sdk/pynni/nni/nas/pytorch/spos/mutator.py @@ -10,27 +10,29 @@ class SPOSSupernetTrainingMutator(RandomMutator): + """ + A random mutator with flops limit. + + Parameters + ---------- + model : nn.Module + PyTorch model. + flops_func : callable + Callable that takes a candidate from `sample_search` and returns its candidate. When `flops_func` + is None, functions related to flops will be deactivated. + flops_lb : number + Lower bound of flops. + flops_ub : number + Upper bound of flops. + flops_bin_num : number + Number of bins divided for the interval of flops to ensure the uniformity. Bigger number will be more + uniform, but the sampling will be slower. + flops_sample_timeout : int + Maximum number of attempts to sample before giving up and use a random candidate. + """ def __init__(self, model, flops_func=None, flops_lb=None, flops_ub=None, flops_bin_num=7, flops_sample_timeout=500): - """ - Parameters - ---------- - model : nn.Module - PyTorch model. - flops_func : callable - Callable that takes a candidate from `sample_search` and returns its candidate. When `flops_func` - is None, functions related to flops will be deactivated. - flops_lb : number - Lower bound of flops. - flops_ub : number - Upper bound of flops. 
-        flops_bin_num : number
-            Number of bins divided for the interval of flops to ensure the uniformity. Bigger number will be more
-            uniform, but the sampling will be slower.
-        flops_sample_timeout : int
-            Maximum number of attempts to sample before giving up and use a random candidate.
-        """
         super().__init__(model)
         self._flops_func = flops_func
         if self._flops_func is not None:
diff --git a/src/sdk/pynni/nni/nas/pytorch/spos/trainer.py b/src/sdk/pynni/nni/nas/pytorch/spos/trainer.py
index 3b5e69f8cd..3b2e349466 100644
--- a/src/sdk/pynni/nni/nas/pytorch/spos/trainer.py
+++ b/src/sdk/pynni/nni/nas/pytorch/spos/trainer.py
@@ -15,43 +15,42 @@
 class SPOSSupernetTrainer(Trainer):
     """
     This trainer trains a supernet that can be used for evolution search.
+
+    Parameters
+    ----------
+    model : nn.Module
+        Model with mutables.
+    mutator : Mutator
+        A mutator object that has been initialized with the model.
+    loss : callable
+        Called with logits and targets. Returns a loss tensor.
+    metrics : callable
+        Returns a dict that maps metrics keys to metrics data.
+    optimizer : Optimizer
+        Optimizer that optimizes the model.
+    num_epochs : int
+        Number of epochs of training.
+    train_loader : iterable
+        Data loader of training. Raises ``StopIteration`` when one epoch is exhausted.
+    valid_loader : iterable
+        Data loader of validation. Raises ``StopIteration`` when one epoch is exhausted.
+    batch_size : int
+        Batch size.
+    workers : int
+        Number of threads for data preprocessing. Not used by this trainer. May be removed in future.
+    device : torch.device
+        Device object. Either ``torch.device("cuda")`` or ``torch.device("cpu")``. When ``None``, the trainer
+        automatically detects GPU and selects GPU first.
+    log_frequency : int
+        Number of mini-batches between two logs of metrics.
+    callbacks : list of Callback
+        Callbacks to plug into the trainer. See Callbacks.
""" def __init__(self, model, loss, metrics, optimizer, num_epochs, train_loader, valid_loader, mutator=None, batch_size=64, workers=4, device=None, log_frequency=None, callbacks=None): - """ - Parameters - ---------- - model : nn.Module - Model with mutables. - mutator : Mutator - A mutator object that has been initialized with the model. - loss : callable - Called with logits and targets. Returns a loss tensor. - metrics : callable - Returns a dict that maps metrics keys to metrics data. - optimizer : Optimizer - Optimizer that optimizes the model. - num_epochs : int - Number of epochs of training. - train_loader : iterable - Data loader of training. Raise ``StopIteration`` when one epoch is exhausted. - dataset_valid : iterable - Data loader of validation. Raise ``StopIteration`` when one epoch is exhausted. - batch_size : int - Batch size. - workers: int - Number of threads for data preprocessing. Not used for this trainer. Maybe removed in future. - device : torch.device - Device object. Either ``torch.device("cuda")`` or ``torch.device("cpu")``. When ``None``, trainer will - automatic detects GPU and selects GPU first. - log_frequency : int - Number of mini-batches to log metrics. - callbacks : list of Callback - Callbacks to plug into the trainer. See Callbacks. - """ assert torch.cuda.is_available() super().__init__(model, mutator if mutator is not None else SPOSSupernetTrainingMutator(model), loss, metrics, optimizer, num_epochs, None, None, diff --git a/src/sdk/pynni/nni/nas/pytorch/trainer.py b/src/sdk/pynni/nni/nas/pytorch/trainer.py index 32888d9bf9..e6ea4be153 100644 --- a/src/sdk/pynni/nni/nas/pytorch/trainer.py +++ b/src/sdk/pynni/nni/nas/pytorch/trainer.py @@ -24,42 +24,54 @@ def default(self, o): # pylint: disable=method-hidden class Trainer(BaseTrainer): + """ + A trainer with some helper functions implemented. To implement a new trainer, + users need to implement :meth:`train_one_epoch`, :meth:`validate_one_epoch` and :meth:`checkpoint`. 
+
+    Parameters
+    ----------
+    model : nn.Module
+        Model with mutables.
+    mutator : BaseMutator
+        A mutator object that has been initialized with the model.
+    loss : callable
+        Called with logits and targets. Returns a loss tensor.
+        See `PyTorch loss functions`_ for examples.
+    metrics : callable
+        Called with logits and targets. Returns a dict that maps metrics keys to metrics data. For example,
+
+        .. code-block:: python
+
+            def metrics_fn(output, target):
+                return {"acc1": accuracy(output, target, topk=1), "acc5": accuracy(output, target, topk=5)}
+
+    optimizer : Optimizer
+        Optimizer that optimizes the model.
+    num_epochs : int
+        Number of epochs of training.
+    dataset_train : torch.utils.data.Dataset
+        Dataset of training. Unless otherwise specified, ``dataset_train`` and ``dataset_valid`` should be standard
+        PyTorch Dataset. See `torch.utils.data`_ for examples.
+    dataset_valid : torch.utils.data.Dataset
+        Dataset of validation/testing.
+    batch_size : int
+        Batch size.
+    workers : int
+        Number of workers used in data preprocessing.
+    device : torch.device
+        Device object. Either ``torch.device("cuda")`` or ``torch.device("cpu")``. When ``None``, the trainer
+        automatically detects GPU and selects GPU first.
+    log_frequency : int
+        Number of mini-batches between two logs of metrics.
+    callbacks : list of Callback
+        Callbacks to plug into the trainer. See Callbacks.
+
+
+    .. _`PyTorch loss functions`: https://pytorch.org/docs/stable/nn.html#loss-functions
+    .. _`torch.utils.data`: https://pytorch.org/docs/stable/data.html
+    """
     def __init__(self, model, mutator, loss, metrics, optimizer, num_epochs,
                  dataset_train, dataset_valid, batch_size, workers, device, log_frequency, callbacks):
-        """
-        Trainer initialization.
-
-        Parameters
-        ----------
-        model : nn.Module
-            Model with mutables.
-        mutator : BaseMutator
-            A mutator object that has been initialized with the model.
-        loss : callable
-            Called with logits and targets. Returns a loss tensor.
-        metrics : callable
-            Returns a dict that maps metrics keys to metrics data.
-        optimizer : Optimizer
-            Optimizer that optimizes the model.
-        num_epochs : int
-            Number of epochs of training.
-        dataset_train : torch.utils.data.Dataset
-            Dataset of training.
-        dataset_valid : torch.utils.data.Dataset
-            Dataset of validation/testing.
-        batch_size : int
-            Batch size.
-        workers : int
-            Number of workers used in data preprocessing.
-        device : torch.device
-            Device object. Either ``torch.device("cuda")`` or ``torch.device("cpu")``. When ``None``, trainer will
-            automatic detects GPU and selects GPU first.
-        log_frequency : int
-            Number of mini-batches to log metrics.
-        callbacks : list of Callback
-            Callbacks to plug into the trainer. See Callbacks.
-        """
-
         self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") if device is None else device
         self.model = model
         self.mutator = mutator
@@ -84,13 +96,38 @@ def __init__(self, model, mutator, loss, metrics, optimizer, num_epochs,

     @abstractmethod
     def train_one_epoch(self, epoch):
+        """
+        Train one epoch.
+
+        Parameters
+        ----------
+        epoch : int
+            Epoch number starting from 0.
+        """
         pass

     @abstractmethod
     def validate_one_epoch(self, epoch):
+        """
+        Validate one epoch.
+
+        Parameters
+        ----------
+        epoch : int
+            Epoch number starting from 0.
+        """
         pass

     def train(self, validate=True):
+        """
+        Train for ``num_epochs`` epochs.
+        Trigger callbacks at the start and the end of each epoch.
+
+        Parameters
+        ----------
+        validate : bool
+            If ``True``, will do validation every epoch.
+        """
         for epoch in range(self.num_epochs):
             for callback in self.callbacks:
                 callback.on_epoch_begin(epoch)
@@ -108,12 +145,26 @@ def train(self, validate=True):
                 callback.on_epoch_end(epoch)

     def validate(self):
+        """
+        Do one validation.
+        """
         self.validate_one_epoch(-1)

     def export(self, file):
+        """
+        Call ``mutator.export()`` and dump the architecture to ``file``.
+
+        Parameters
+        ----------
+        file : str
+            Path of the file. Expected to be a JSON file.
+        """
         mutator_export = self.mutator.export()
         with open(file, "w") as f:
             json.dump(mutator_export, f, indent=2, sort_keys=True, cls=TorchTensorEncoder)

     def checkpoint(self):
+        """
+        Return trainer checkpoint.
+        """
         raise NotImplementedError("Not implemented yet")
diff --git a/src/sdk/pynni/nni/nas/pytorch/utils.py b/src/sdk/pynni/nni/nas/pytorch/utils.py
index 007c28a902..06961f8e80 100644
--- a/src/sdk/pynni/nni/nas/pytorch/utils.py
+++ b/src/sdk/pynni/nni/nas/pytorch/utils.py
@@ -12,12 +12,18 @@ def global_mutable_counting():
+    """
+    A program-level counter starting from 1.
+    """
     global _counter
     _counter += 1
     return _counter


 def to_device(obj, device):
+    """
+    Move a tensor, tuple, list, or dict onto device.
+    """
     if torch.is_tensor(obj):
         return obj.to(device)
     if isinstance(obj, tuple):
@@ -32,12 +38,18 @@ def to_device(obj, device):

 class AverageMeterGroup:
-    """Average meter group for multiple average meters"""
+    """
+    Average meter group for multiple average meters.
+    """

     def __init__(self):
         self.meters = OrderedDict()

     def update(self, data):
+        """
+        Update the meter group with a dict of metrics.
+        Nonexistent average meters will be created automatically.
+        """
         for k, v in data.items():
             if k not in self.meters:
                 self.meters[k] = AverageMeter(k, ":4f")
@@ -53,34 +65,49 @@ def __str__(self):
         return " ".join(str(v) for v in self.meters.values())

     def summary(self):
+        """
+        Return a summary string of group data.
+        """
         return " ".join(v.summary() for v in self.meters.values())


 class AverageMeter:
-    """Computes and stores the average and current value"""
+    """
+    Computes and stores the average and current value.
+
+    Parameters
+    ----------
+    name : str
+        Name to display.
+    fmt : str
+        Format string to print the values.
+    """

     def __init__(self, name, fmt=':f'):
-        """
-        Initialization of AverageMeter
-
-        Parameters
-        ----------
-        name : str
-            Name to display.
-        fmt : str
-            Format string to print the values.
-        """
         self.name = name
         self.fmt = fmt
         self.reset()

     def reset(self):
+        """
+        Reset the meter.
+        """
         self.val = 0
         self.avg = 0
         self.sum = 0
         self.count = 0

     def update(self, val, n=1):
+        """
+        Update with value and weight.
+
+        Parameters
+        ----------
+        val : float or int
+            The new value to be accounted for.
+        n : int
+            The weight of the new value.
+        """
         if not isinstance(val, float) and not isinstance(val, int):
             _logger.warning("Values passed to AverageMeter must be number, not %s.", type(val))
         self.val = val
@@ -104,6 +131,11 @@ class StructuredMutableTreeNode:
     This tree can be seen as a "flattened" version of the module tree. Since nested mutable entity is not supported yet,
     the following must be true: each subtree corresponds to a ``MutableScope`` and each leaf corresponds to a
     ``Mutable`` (other than ``MutableScope``).
+
+    Parameters
+    ----------
+    mutable : nni.nas.pytorch.mutables.Mutable
+        The mutable that the current node is linked with.
     """

     def __init__(self, mutable):
@@ -111,10 +143,16 @@ def __init__(self, mutable):
         self.children = []

     def add_child(self, mutable):
+        """
+        Add a tree node to the children list of the current node.
+        """
         self.children.append(StructuredMutableTreeNode(mutable))
         return self.children[-1]

     def type(self):
+        """
+        Return the ``type`` of mutable content.
+        """
         return type(self.mutable)

     def __iter__(self):

From f7cf3ea5d9cfb4a0e87096ff81bc5c9ea595b96b Mon Sep 17 00:00:00 2001
From: QuanluZhang
Date: Sat, 8 Feb 2020 22:08:43 +0800
Subject: [PATCH 3/9] Doc update index (#2017)

---
 docs/en_US/Compressor/Overview.md  |  4 +-
 docs/en_US/autotune_ref.md         | 80 ++++++++++++++++++++++++++++++
 docs/en_US/feature_engineering.rst |  7 ++-
 docs/en_US/hyperparameter_tune.rst | 18 +++++--
 docs/en_US/index.rst               |  5 +-
 docs/en_US/model_compression.rst   | 13 ++---
 docs/en_US/nas.rst                 | 21 ++++----
 docs/en_US/pruners.rst             | 16 ++++++
 docs/en_US/quantizers.rst          | 11 ++++
 docs/en_US/sdk_reference.rst       | 74 ++++------------------
 10 files changed, 151 insertions(+), 98 deletions(-)
 create mode 100644 docs/en_US/autotune_ref.md
 create mode 100644 docs/en_US/pruners.rst
 create mode 100644 docs/en_US/quantizers.rst

diff --git a/docs/en_US/Compressor/Overview.md b/docs/en_US/Compressor/Overview.md
index 7848b1fc7e..8017817c48 100644
--- a/docs/en_US/Compressor/Overview.md
+++ b/docs/en_US/Compressor/Overview.md
@@ -1,7 +1,7 @@
 # Model Compression with NNI
 As larger neural networks with more layers and nodes are considered, reducing their storage and computational cost becomes critical, especially for some real-time applications. Model compression can be used to address this problem.

-We are glad to announce the alpha release for model compression toolkit on top of NNI, it's still in the experiment phase which might evolve based on usage feedback. We'd like to invite you to use, feedback and even contribute.
+We are glad to introduce the model compression toolkit on top of NNI. It is still in the experimental phase and might evolve based on usage feedback. We'd like to invite you to use it, give feedback, and even contribute.

 NNI provides an easy-to-use toolkit to help user design and use compression algorithms. It currently supports PyTorch with unified interface. For users to compress their models, they only need to add several lines in their code.
There are some popular model compression algorithms built-in in NNI. Users could further use NNI's auto tuning power to find the best compressed model, which is detailed in [Auto Model Compression](./AutoCompression.md). On the other hand, users could easily customize their new compression algorithms using NNI's interface, refer to the tutorial [here](#customize-new-compression-algorithms). @@ -335,7 +335,7 @@ class YourQuantizer(Quantizer): If you do not customize `QuantGrad`, the default backward is Straight-Through Estimator. _Coming Soon_ ... -## **Reference and Feedback** +## Reference and Feedback * To [report a bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md) for this feature in GitHub; * To [file a feature or improvement request](https://github.com/microsoft/nni/issues/new?template=enhancement.md) for this feature in GitHub; * To know more about [Feature Engineering with NNI](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md); diff --git a/docs/en_US/autotune_ref.md b/docs/en_US/autotune_ref.md new file mode 100644 index 0000000000..9c8857b570 --- /dev/null +++ b/docs/en_US/autotune_ref.md @@ -0,0 +1,80 @@ +# Python API Reference of Auto Tune + +```eval_rst +.. contents:: +``` + +## Trial + +```eval_rst +.. autofunction:: nni.get_next_parameter +.. autofunction:: nni.get_current_parameter +.. autofunction:: nni.report_intermediate_result +.. autofunction:: nni.report_final_result +.. autofunction:: nni.get_experiment_id +.. autofunction:: nni.get_trial_id +.. autofunction:: nni.get_sequence_id +``` + +## Tuner + +```eval_rst +.. autoclass:: nni.tuner.Tuner + :members: + +.. autoclass:: nni.hyperopt_tuner.hyperopt_tuner.HyperoptTuner + :members: + +.. autoclass:: nni.evolution_tuner.evolution_tuner.EvolutionTuner + :members: + +.. autoclass:: nni.smac_tuner.SMACTuner + :members: + +.. autoclass:: nni.gridsearch_tuner.GridSearchTuner + :members: + +.. 
autoclass:: nni.networkmorphism_tuner.networkmorphism_tuner.NetworkMorphismTuner + :members: + +.. autoclass:: nni.metis_tuner.metis_tuner.MetisTuner + :members: + +.. autoclass:: nni.ppo_tuner.PPOTuner + :members: + +.. autoclass:: nni.batch_tuner.batch_tuner.BatchTuner + :members: + +.. autoclass:: nni.gp_tuner.gp_tuner.GPTuner + :members: +``` + +## Assessor + +```eval_rst +.. autoclass:: nni.assessor.Assessor + :members: + +.. autoclass:: nni.assessor.AssessResult + :members: + +.. autoclass:: nni.curvefitting_assessor.CurvefittingAssessor + :members: + +.. autoclass:: nni.medianstop_assessor.MedianstopAssessor + :members: +``` + +## Advisor + +```eval_rst +.. autoclass:: nni.msg_dispatcher_base.MsgDispatcherBase + :members: + +.. autoclass:: nni.hyperband_advisor.hyperband_advisor.Hyperband + :members: + +.. autoclass:: nni.bohb_advisor.bohb_advisor.BOHB + :members: +``` diff --git a/docs/en_US/feature_engineering.rst b/docs/en_US/feature_engineering.rst index 6c804ad50e..a2b2afda20 100644 --- a/docs/en_US/feature_engineering.rst +++ b/docs/en_US/feature_engineering.rst @@ -1,13 +1,16 @@ +################### Feature Engineering -=================== +################### -We are glad to announce the alpha release for Feature Engineering toolkit on top of NNI, +We are glad to introduce Feature Engineering toolkit on top of NNI, it's still in the experiment phase which might evolve based on usage feedback. We'd like to invite you to use, feedback and even contribute. For details, please refer to the following tutorials: .. 
toctree::
+   :maxdepth: 2
+
    Overview
    GradientFeatureSelector
    GBDTSelector
diff --git a/docs/en_US/hyperparameter_tune.rst b/docs/en_US/hyperparameter_tune.rst
index f7e55f89ab..d49c3109af 100644
--- a/docs/en_US/hyperparameter_tune.rst
+++ b/docs/en_US/hyperparameter_tune.rst
@@ -1,6 +1,18 @@
-######################
-Hyper-parameter Tuning
-######################
+#############################
+Auto (Hyper-parameter) Tuning
+#############################
+
+Auto tuning is one of the key features provided by NNI; a main application scenario is
+hyper-parameter tuning. Trial code is what gets tuned. NNI provides many popular
+auto tuning algorithms (called Tuners) and some early stopping algorithms (called Assessors).
+NNI supports running trials on various training platforms, for example, on a local machine,
+on several servers in a distributed manner, or on platforms such as OpenPAI and Kubernetes.
+
+Other key features of NNI, such as model compression and feature engineering, can also be further
+enhanced by auto tuning, as described when those features are introduced.
+
+NNI is highly extensible; advanced users can customize their own Tuner, Assessor, and Training Service
+according to their needs.

 .. toctree::
    :maxdepth: 2
diff --git a/docs/en_US/index.rst b/docs/en_US/index.rst
index 2526188371..d40792a74a 100644
--- a/docs/en_US/index.rst
+++ b/docs/en_US/index.rst
@@ -2,9 +2,6 @@
 Neural Network Intelligence
 ###########################

-********
-Contents
-********

 .. 
toctree::
    :caption: Table of Contents

@@ -14,7 +11,7 @@ Contents
    Overview
    Installation
    QuickStart
-   Hyper-parameter Tuning
+   Auto (Hyper-parameter) Tuning
    Neural Architecture Search
    Model Compression
    Feature Engineering
diff --git a/docs/en_US/model_compression.rst b/docs/en_US/model_compression.rst
index 61caf4d8d8..e87d9e1c43 100644
--- a/docs/en_US/model_compression.rst
+++ b/docs/en_US/model_compression.rst
@@ -13,14 +13,9 @@ On the other hand, users could easily customize their new compression algorithms
 For details, please refer to the following tutorials:

 .. toctree::
+   :maxdepth: 2
+
    Overview
-   Level Pruner
-   AGP Pruner
-   L1Filter Pruner
-   Slim Pruner
-   Lottery Ticket Pruner
-   FPGM Pruner
-   Naive Quantizer
-   QAT Quantizer
-   DoReFa Quantizer
+   Pruners
+   Quantizers
    Automatic Model Compression
diff --git a/docs/en_US/nas.rst b/docs/en_US/nas.rst
index 6f2fb05bbd..b04f3a9e70 100644
--- a/docs/en_US/nas.rst
+++ b/docs/en_US/nas.rst
@@ -1,26 +1,27 @@
-##############
-NAS Algorithms
-##############
+##########################
+Neural Architecture Search
+##########################

 Automatic neural architecture search is taking an increasingly important role on finding better models.
-Recent research works have proved the feasibility of automatic NAS, and also found some models that could beat manually designed and tuned models.
-Some of representative works are NASNet, ENAS, DARTS, Network Morphism, and Evolution. There are new innovations keeping emerging.
+Recent research works have proven the feasibility of automatic NAS and have found models that can beat manually tuned ones.
+Some representative works are NASNet, ENAS, DARTS, Network Morphism, and Evolution. Moreover, new innovations keep emerging.

-However, it takes great efforts to implement NAS algorithms, and it is hard to reuse code base of existing algorithms in new one. 
+However, it takes great effort to implement NAS algorithms, and it is hard to reuse the code base of existing algorithms in a new one.
 To facilitate NAS innovations (e.g., design and implement new NAS models, compare different NAS models side-by-side),
 an easy-to-use and flexible programming interface is crucial.

-With this motivation, our ambition is to provide a unified architecture in NNI,
+Therefore, we provide a unified interface for NAS,
 to accelerate innovations on NAS, and apply state-of-art algorithms on real world problems faster.
-
 For details, please refer to the following tutorials:

 .. toctree::
+   :maxdepth: 2
+
    Overview
-   Guide
-   API Reference
+   Tutorial
    ENAS
    DARTS
    P-DARTS
    SPOS
    CDARTS
+   API Reference
diff --git a/docs/en_US/pruners.rst b/docs/en_US/pruners.rst
new file mode 100644
index 0000000000..bf3771df16
--- /dev/null
+++ b/docs/en_US/pruners.rst
@@ -0,0 +1,16 @@
+############################
+Supported Pruning Algorithms
+############################
+
+.. toctree::
+   :maxdepth: 1
+
+   Level Pruner
+   AGP Pruner
+   Lottery Ticket Pruner
+   FPGM Pruner
+   L1Filter Pruner
+   L2Filter Pruner
+   ActivationAPoZRankFilterPruner
+   ActivationMeanRankFilterPruner
+   Slim Pruner
diff --git a/docs/en_US/quantizers.rst b/docs/en_US/quantizers.rst
new file mode 100644
index 0000000000..8b082c2789
--- /dev/null
+++ b/docs/en_US/quantizers.rst
@@ -0,0 +1,11 @@
+#################################
+Supported Quantization Algorithms
+#################################
+
+.. toctree::
+   :maxdepth: 1
+
+   Naive Quantizer
+   QAT Quantizer
+   DoReFa Quantizer
+   BNN Quantizer
\ No newline at end of file
diff --git a/docs/en_US/sdk_reference.rst b/docs/en_US/sdk_reference.rst
index 6b4d6d8d79..49d47a2ffd 100644
--- a/docs/en_US/sdk_reference.rst
+++ b/docs/en_US/sdk_reference.rst
@@ -1,72 +1,10 @@
-###########################
+####################
 Python API Reference
-###########################
+####################

-Trial
-------------------------
-.. 
autofunction:: nni.get_next_parameter -.. autofunction:: nni.get_current_parameter -.. autofunction:: nni.report_intermediate_result -.. autofunction:: nni.report_final_result -.. autofunction:: nni.get_experiment_id -.. autofunction:: nni.get_trial_id -.. autofunction:: nni.get_sequence_id +.. toctree:: + :maxdepth: 1 -Tuner ------------------------- -.. autoclass:: nni.tuner.Tuner - :members: - -.. autoclass:: nni.hyperopt_tuner.hyperopt_tuner.HyperoptTuner - :members: - -.. autoclass:: nni.evolution_tuner.evolution_tuner.EvolutionTuner - :members: - -.. autoclass:: nni.smac_tuner.SMACTuner - :members: - -.. autoclass:: nni.gridsearch_tuner.GridSearchTuner - :members: - -.. autoclass:: nni.networkmorphism_tuner.networkmorphism_tuner.NetworkMorphismTuner - :members: - -.. autoclass:: nni.metis_tuner.metis_tuner.MetisTuner - :members: - -.. autoclass:: nni.ppo_tuner.PPOTuner - :members: - -.. autoclass:: nni.batch_tuner.batch_tuner.BatchTuner - :members: - -.. autoclass:: nni.gp_tuner.gp_tuner.GPTuner - :members: - -Assessor ------------------------- -.. autoclass:: nni.assessor.Assessor - :members: - -.. autoclass:: nni.assessor.AssessResult - :members: - -.. autoclass:: nni.curvefitting_assessor.CurvefittingAssessor - :members: - -.. autoclass:: nni.medianstop_assessor.MedianstopAssessor - :members: - - -Advisor ------------------------- -.. autoclass:: nni.msg_dispatcher_base.MsgDispatcherBase - :members: - -.. autoclass:: nni.hyperband_advisor.hyperband_advisor.Hyperband - :members: - -.. 
autoclass:: nni.bohb_advisor.bohb_advisor.BOHB - :members: + Auto Tune + NAS \ No newline at end of file From e9f137f05d22362fc6430c8aaef9440872a85e8a Mon Sep 17 00:00:00 2001 From: QuanluZhang Date: Sun, 9 Feb 2020 19:06:17 +0800 Subject: [PATCH 4/9] merge from master (#2019) --- README.md | 79 +++------ README_zh_CN.md | 99 ++++------- azure-pipelines.yml | 10 +- .../CommunitySharings/NNI_AutoFeatureEng.md | 99 +++++++++++ .../CommunitySharings/community_sharings.rst | 1 + docs/en_US/TrainingService/PaiMode.md | 44 ++++- docs/en_US/TrainingService/PaiYarnMode.md | 6 +- .../TrainingService/RemoteMachineMode.md | 40 +++-- .../TrainingService/SupportTrainingService.md | 6 +- docs/en_US/Tuner/HyperbandAdvisor.md | 2 +- docs/en_US/Tutorial/InstallationLinux.md | 116 ++++++++---- docs/en_US/Tutorial/InstallationWin.md | 89 +++++++--- docs/en_US/Tutorial/Nnictl.md | 4 +- docs/en_US/Tutorial/QuickStart.md | 28 +-- docs/en_US/reference.rst | 2 + docs/img/pai_data_management_page.jpg | Bin 0 -> 226121 bytes docs/img/pai_job_submission_page.jpg | Bin 0 -> 127488 bytes docs/img/pai_token_button.jpg | Bin 0 -> 16503 bytes docs/img/pai_token_profile.jpg | Bin 0 -> 55722 bytes .../CommunitySharings/NNI_AutoFeatureEng.md | 88 ++++++++++ .../CommunitySharings/community_sharings.rst | 1 + docs/zh_CN/Compressor/Pruner.md | 2 - docs/zh_CN/Compressor/Quantizer.md | 11 +- docs/zh_CN/NAS/CDARTS.md | 61 +++++++ docs/zh_CN/NAS/DARTS.md | 46 ++++- docs/zh_CN/NAS/ENAS.md | 43 ++++- docs/zh_CN/NAS/NasInterface.md | 2 +- docs/zh_CN/NAS/Overview.md | 84 ++------- docs/zh_CN/Release.md | 142 +++++++++------ docs/zh_CN/TrainingService/PaiYarnMode.md | 2 +- .../TrainingService/RemoteMachineMode.md | 40 +++-- .../TrainingService/SupportTrainingService.md | 31 ++-- docs/zh_CN/TrialExample/EfficientNet.md | 21 +++ docs/zh_CN/TrialExample/KDExample.md | 2 +- docs/zh_CN/TrialExample/SklearnExamples.md | 6 +- docs/zh_CN/Tutorial/FAQ.md | 4 + docs/zh_CN/Tutorial/HowToDebug.md | 2 +- 
docs/zh_CN/Tutorial/Installation.md | 165 ++++++++++++------ docs/zh_CN/Tutorial/Nnictl.md | 2 + docs/zh_CN/Tutorial/QuickStart.md | 45 ++--- docs/zh_CN/conf.py | 5 +- docs/zh_CN/examples.rst | 2 + docs/zh_CN/model_compression.rst | 2 +- docs/zh_CN/nas.rst | 4 +- docs/zh_CN/training_services.rst | 1 + .../auto-feature-engineering/README_zh_CN.md | 9 +- examples/trials/auto-gbdt/config_pai.yml | 7 +- examples/trials/auto-gbdt/config_paiYarn.yml | 32 ++++ .../trials/cifar10_pytorch/config_pai.yml | 7 +- .../trials/cifar10_pytorch/config_paiYarn.yml | 32 ++++ examples/trials/efficientnet/README_zh_CN.md | 20 +-- examples/trials/efficientnet/config_pai.yml | 5 +- .../trials/efficientnet/config_paiYarn.yml | 28 +++ examples/trials/ga_squad/config_pai.yml | 7 +- examples/trials/ga_squad/config_paiYarn.yml | 32 ++++ examples/trials/mnist-advisor/config_pai.yml | 7 +- .../trials/mnist-advisor/config_paiYarn.yml | 36 ++++ .../trials/mnist-annotation/config_pai.yml | 7 +- .../mnist-annotation/config_paiYarn.yml | 31 ++++ .../mnist-batch-tune-keras/config_pai.yml | 7 +- .../mnist-batch-tune-keras/config_paiYarn.yml | 29 +++ examples/trials/mnist-keras/config_pai.yml | 7 +- .../trials/mnist-keras/config_paiYarn.yml | 32 ++++ examples/trials/mnist-pytorch/config_pai.yml | 7 +- .../trials/mnist-pytorch/config_paiYarn.yml | 32 ++++ examples/trials/mnist-tfv1/config_pai.yml | 7 +- examples/trials/mnist-tfv1/config_paiYarn.yml | 32 ++++ .../trials/nas_cifar10/config_paiYarn_ppo.yml | 31 ++++ .../trials/nas_cifar10/config_pai_ppo.yml | 5 +- .../FashionMNIST/config_pai.yml | 7 +- .../FashionMNIST/config_paiYarn.yml | 39 +++++ .../network_morphism/cifar10/config_pai.yml | 7 +- .../cifar10/config_paiYarn.yml | 39 +++++ .../sklearn/classification/config_pai.yml | 7 +- .../sklearn/classification/config_paiYarn.yml | 32 ++++ .../trials/sklearn/regression/config_pai.yml | 7 +- .../sklearn/regression/config_paiYarn.yml | 32 ++++ src/nni_manager/common/log.ts | 43 ++--- 
src/nni_manager/main.ts | 30 +++- src/nni_manager/package.json | 1 + .../rest_server/restValidationSchemas.ts | 1 + .../pai/paiK8S/paiK8SConfig.ts | 4 +- .../pai/paiK8S/paiK8STrainingService.ts | 21 ++- src/nni_manager/yarn.lock | 5 + .../pynni/nni/compression/torch/pruners.py | 2 +- src/sdk/pynni/nni/medianstop_assessor/test.py | 6 +- .../nni/nas/pytorch/classic_nas/mutator.py | 9 + .../pynni/nni/nas/pytorch/darts/mutator.py | 33 ++-- src/sdk/pynni/nni/nas/pytorch/fixed.py | 12 +- src/sdk/pynni/nni/nas/pytorch/utils.py | 8 + .../tests/models/pytorch_models/__init__.py | 6 + .../models/pytorch_models/mutable_scope.py | 95 ++++++++++ .../tests/models/pytorch_models/naive.py | 45 +++++ .../tests/models/pytorch_models/nested.py | 34 ++++ src/sdk/pynni/tests/test_nas.py | 106 +++++++++++ test/config_test.py | 8 +- test/generate_ts_config.py | 19 +- test/pipelines-it-frameworkcontroller.yml | 55 ++++++ test/pipelines-it-local-windows.yml | 2 +- test/training_service.yml | 26 +++ tools/nni_cmd/config_schema.py | 35 ++-- tools/nni_cmd/launcher.py | 40 +++-- tools/nni_cmd/launcher_utils.py | 46 ++++- tools/nni_cmd/nnictl.py | 4 +- tools/nni_cmd/nnictl_utils.py | 4 +- tools/nni_cmd/ssh_utils.py | 8 +- tools/nni_cmd/tensorboard_utils.py | 6 +- 107 files changed, 2155 insertions(+), 617 deletions(-) create mode 100644 docs/en_US/CommunitySharings/NNI_AutoFeatureEng.md create mode 100644 docs/img/pai_data_management_page.jpg create mode 100644 docs/img/pai_job_submission_page.jpg create mode 100644 docs/img/pai_token_button.jpg create mode 100644 docs/img/pai_token_profile.jpg create mode 100644 docs/zh_CN/CommunitySharings/NNI_AutoFeatureEng.md create mode 100644 docs/zh_CN/NAS/CDARTS.md create mode 100644 docs/zh_CN/TrialExample/EfficientNet.md create mode 100644 examples/trials/auto-gbdt/config_paiYarn.yml create mode 100644 examples/trials/cifar10_pytorch/config_paiYarn.yml create mode 100644 examples/trials/efficientnet/config_paiYarn.yml create mode 100644 
examples/trials/ga_squad/config_paiYarn.yml create mode 100644 examples/trials/mnist-advisor/config_paiYarn.yml create mode 100644 examples/trials/mnist-annotation/config_paiYarn.yml create mode 100644 examples/trials/mnist-batch-tune-keras/config_paiYarn.yml create mode 100644 examples/trials/mnist-keras/config_paiYarn.yml create mode 100644 examples/trials/mnist-pytorch/config_paiYarn.yml create mode 100644 examples/trials/mnist-tfv1/config_paiYarn.yml create mode 100644 examples/trials/nas_cifar10/config_paiYarn_ppo.yml create mode 100644 examples/trials/network_morphism/FashionMNIST/config_paiYarn.yml create mode 100644 examples/trials/network_morphism/cifar10/config_paiYarn.yml create mode 100644 examples/trials/sklearn/classification/config_paiYarn.yml create mode 100644 examples/trials/sklearn/regression/config_paiYarn.yml create mode 100644 src/sdk/pynni/tests/models/pytorch_models/__init__.py create mode 100644 src/sdk/pynni/tests/models/pytorch_models/mutable_scope.py create mode 100644 src/sdk/pynni/tests/models/pytorch_models/naive.py create mode 100644 src/sdk/pynni/tests/models/pytorch_models/nested.py create mode 100644 src/sdk/pynni/tests/test_nas.py create mode 100644 test/pipelines-it-frameworkcontroller.yml diff --git a/README.md b/README.md index 20d84db7b7..e6ecfcd8bb 100644 --- a/README.md +++ b/README.md @@ -167,7 +167,7 @@ Within the following table, we summarized the current NNI capabilities, we are g - + @@ -193,18 +193,18 @@ Within the following table, we summarized the current NNI capabilities, we are g
  • Support TrainingService
  • Implement TrainingService
  • - - + + -## **Install & Verify** +## **Installation** -**Install through pip** +### **Install** -* We support Linux, MacOS and Windows (local, remote and pai mode) in current stage, Ubuntu 16.04 or higher, MacOS 10.14.1 along with Windows 10.1809 are tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`. +NNI supports and is tested on Ubuntu >= 16.04, macOS >= 10.14.1, and Windows 10 >= 1809. Simply run the following `pip install` in an environment that has `python 64-bit >= 3.5`. -Linux and MacOS +Linux or macOS ```bash python3 -m pip install --upgrade nni @@ -216,65 +216,39 @@ Windows python -m pip install --upgrade nni ``` -Note: - -* `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges. -* Currently NNI on Windows support local, remote and pai mode. Anaconda or Miniconda is highly recommended to install NNI on Windows. -* If there is any error like `Segmentation fault`, please refer to [FAQ](docs/en_US/Tutorial/FAQ.md) - -**Install through source code** - -* We support Linux (Ubuntu 16.04 or higher), MacOS (10.14.1) and Windows (10.1809) in our current stage. - -Linux and MacOS - -* Run the following commands in an environment that has `python >= 3.5`, `git` and `wget`. - -```bash - git clone -b v1.3 https://github.com/Microsoft/nni.git - cd nni - source install.sh -``` - -Windows - -* Run the following commands in an environment that has `python >=3.5`, `git` and `PowerShell` +If you want to try latest code, please [install NNI](docs/en_US/Tutorial/Installation.md) from source code. -```bash - git clone -b v1.3 https://github.com/Microsoft/nni.git - cd nni - powershell -ExecutionPolicy Bypass -file install.ps1 -``` +For detail system requirements of NNI, please refer to [here](docs/en_US/Tutorial/Installation.md#system-requirements). 
-For the system requirements of NNI, please refer to [Install NNI](docs/en_US/Tutorial/Installation.md) +Note: -For NNI on Windows, please refer to [NNI on Windows](docs/en_US/Tutorial/NniOnWindows.md) +* If there is any privilege issue, add `--user` to install NNI in the user directory. +* Currently NNI on Windows supports local, remote and pai mode. Anaconda or Miniconda is highly recommended to install NNI on Windows. +* If there is any error like `Segmentation fault`, please refer to [FAQ](docs/en_US/Tutorial/FAQ.md). For FAQ on Windows, please refer to [NNI on Windows](docs/en_US/Tutorial/NniOnWindows.md). -**Verify install** +### **Verify installation** -The following example is an experiment built on TensorFlow. Make sure you have **TensorFlow 1.x installed** before running it. Note that **currently Tensorflow 2.0 is NOT supported**. +The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it. * Download the examples via clone the source code. -```bash - git clone -b v1.3 https://github.com/Microsoft/nni.git -``` - -Linux and MacOS + ```bash + git clone -b v1.3 https://github.com/Microsoft/nni.git + ``` * Run the MNIST example. -```bash - nnictl create --config nni/examples/trials/mnist-tfv1/config.yml -``` + Linux or macOS -Windows + ```bash + nnictl create --config nni/examples/trials/mnist-tfv1/config.yml + ``` -* Run the MNIST example. + Windows -```bash - nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml -``` + ```bash + nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml + ``` * Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`. 
@@ -371,4 +345,3 @@ We encourage researchers and students leverage these projects to accelerate the ## **License** The entire codebase is under [MIT license](LICENSE) - diff --git a/README_zh_CN.md b/README_zh_CN.md index ec77fcbd50..9aca68dde8 100644 --- a/README_zh_CN.md +++ b/README_zh_CN.md @@ -4,7 +4,7 @@ * * * -[![MIT 许可证](https://img.shields.io/badge/license-MIT-brightgreen.svg)](LICENSE) [![生成状态](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/Microsoft.nni)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=6) [![问题](https://img.shields.io/github/issues-raw/Microsoft/nni.svg)](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen) [![Bug](https://img.shields.io/github/issues/Microsoft/nni/bug.svg)](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3Abug) [![拉取请求](https://img.shields.io/github/issues-pr-raw/Microsoft/nni.svg)](https://github.com/Microsoft/nni/pulls?q=is%3Apr+is%3Aopen) [![版本](https://img.shields.io/github/release/Microsoft/nni.svg)](https://github.com/Microsoft/nni/releases) [![进入 https://gitter.im/Microsoft/nni 聊天室提问](https://badges.gitter.im/Microsoft/nni.svg)](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![文档状态](https://readthedocs.org/projects/nni/badge/?version=latest)](https://nni.readthedocs.io/zh/latest/?badge=latest) +[![MIT 许可证](https://img.shields.io/badge/license-MIT-brightgreen.svg)](LICENSE) [![生成状态](https://msrasrg.visualstudio.com/NNIOpenSource/_apis/build/status/integration-test-local?branchName=master)](https://msrasrg.visualstudio.com/NNIOpenSource/_build/latest?definitionId=17&branchName=master) [![问题](https://img.shields.io/github/issues-raw/Microsoft/nni.svg)](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen) [![Bug](https://img.shields.io/github/issues/Microsoft/nni/bug.svg)](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3Abug) 
[![拉取请求](https://img.shields.io/github/issues-pr-raw/Microsoft/nni.svg)](https://github.com/Microsoft/nni/pulls?q=is%3Apr+is%3Aopen) [![版本](https://img.shields.io/github/release/Microsoft/nni.svg)](https://github.com/Microsoft/nni/releases) [![进入 https://gitter.im/Microsoft/nni 聊天室提问](https://badges.gitter.im/Microsoft/nni.svg)](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![文档状态](https://readthedocs.org/projects/nni/badge/?version=latest)](https://nni.readthedocs.io/zh/latest/?badge=latest) [English](README.md) @@ -83,6 +83,7 @@ NNI 提供命令行工具以及友好的 WebUI 来管理训练的 Experiment。
  • Auto-gbdt
  • Cifar10-pytorch
  • Scikit-learn
  • +
  • EfficientNet
  • 更多...
    @@ -121,6 +122,7 @@ NNI 提供命令行工具以及友好的 WebUI 来管理训练的 Experiment。
  • ENAS
  • DARTS
  • P-DARTS
  • +
  • CDARTS
  • Network Morphism
  • @@ -160,7 +162,7 @@ NNI 提供命令行工具以及友好的 WebUI 来管理训练的 Experiment。 - + @@ -186,18 +188,18 @@ NNI 提供命令行工具以及友好的 WebUI 来管理训练的 Experiment。
  • 支持训练平台
  • 实现训练平台
  • - - + + -## **安装和验证** +## **安装** -**通过 pip 命令安装** +### **安装** -* 当前支持 Linux,MacOS 和 Windows(本机,远程,OpenPAI 模式),在 Ubuntu 16.04 或更高版本,MacOS 10.14.1 以及 Windows 10.1809 上进行了测试。 在 `python >= 3.5` 的环境中,只需要运行 `pip install` 即可完成安装。 +NNI 支持并在 Ubuntu >= 16.04, macOS >= 10.14.1, 和 Windows 10 >= 1809 通过了测试。 在 `python 64-bit >= 3.5` 的环境中,只需要运行 `pip install` 即可完成安装。 -Linux 和 macOS +Linux 或 macOS ```bash python3 -m pip install --upgrade nni @@ -209,65 +211,39 @@ Windows python -m pip install --upgrade nni ``` -注意: - -* 如果需要将 NNI 安装到自己的 home 目录中,可使用 `--user`,这样也不需要任何特殊权限。 -* 目前,Windows 上的 NNI 支持本机,远程和 OpenPAI 模式。 强烈推荐使用 Anaconda 或 Miniconda 在 Windows 上安装 NNI。 -* 如果遇到如`Segmentation fault` 这样的任何错误请参考[常见问题](docs/zh_CN/Tutorial/FAQ.md)。 - -**通过源代码安装** - -* 当前支持 Linux(Ubuntu 16.04 或更高版本),MacOS(10.14.1)以及 Windows 10(1809 版)。 - -Linux 和 MacOS - -* 在 `python >= 3.5` 的环境中运行命令: `git` 和 `wget`,确保安装了这两个组件。 - -```bash - git clone -b v1.3 https://github.com/Microsoft/nni.git - cd nni - source install.sh -``` - -Windows - -* 在 `python >=3.5` 的环境中运行命令: `git` 和 `PowerShell`,确保安装了这两个组件。 +如果想要尝试最新代码,可通过源代码[安装 NNI](docs/zh_CN/Tutorial/Installation.md)。 -```bash - git clone -b v1.3 https://github.com/Microsoft/nni.git - cd nni - powershell -ExecutionPolicy Bypass -file install.ps1 -``` +有关 NNI 的详细系统要求,参考[这里](docs/zh_CN/Tutorial/Installation.md#system-requirements)。 -参考[安装 NNI](docs/zh_CN/Tutorial/Installation.md) 了解系统需求。 +注意: -Windows 上参考 [Windows 上使用 NNI](docs/zh_CN/Tutorial/NniOnWindows.md)。 +* 如果遇到任何权限问题,可添加 `--user` 在用户目录中安装 NNI。 +* 目前,Windows 上的 NNI 支持本机,远程和 OpenPAI 模式。 强烈推荐使用 Anaconda 或 Miniconda 在 Windows 上安装 NNI。 +* 如果遇到如 `Segmentation fault` 等错误参考[常见问题](docs/zh_CN/Tutorial/FAQ.md)。 Windows 上的 FAQ 参考[在 Windows 上使用 NNI](docs/zh_CN/Tutorial/NniOnWindows.md)。 -**验证安装** +### **验证安装** -以下示例 Experiment 依赖于 TensorFlow 。 在运行前确保安装了 **TensorFlow 1.x**。 注意,**目前不支持 TensorFlow 2.0**。 +以下示例基于 TensorFlow 1.x 。确保运行环境中使用的的是 ** TensorFlow 1.x**。 * 通过克隆源代码下载示例。 - -```bash - git clone -b v1.3 
https://github.com/Microsoft/nni.git -``` - -Linux 和 MacOS - -* 运行 MNIST 示例。 - -```bash - nnictl create --config nni/examples/trials/mnist-tfv1/config.yml -``` - -Windows + + ```bash + git clone -b v1.3 https://github.com/Microsoft/nni.git + ``` * 运行 MNIST 示例。 - -```bash - nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml -``` + + Linux 或 macOS + + ```bash + nnictl create --config nni/examples/trials/mnist-tfv1/config.yml + ``` + + Windows + + ```bash + nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml + ``` * 在命令行中等待输出 `INFO: Successfully started experiment!`。 此消息表明 Experiment 已成功启动。 通过命令行输出的 `Web UI url` 来访问 Experiment 的界面。 @@ -319,11 +295,12 @@ You can use these commands to get more information about the experiment 该项目采用了 [ Microsoft 开源行为准则 ](https://opensource.microsoft.com/codeofconduct/)。 有关详细信息,请参阅[常见问题解答](https://opensource.microsoft.com/codeofconduct/faq/),如有任何疑问或意见可联系 opencode@microsoft.com。 -熟悉贡献协议后,即可按照 NNI 开发人员教程,创建第一个 PR =): +熟悉贡献协议后,即可按照 NNI 开发人员教程,创建第一个 PR: -* 推荐新贡献者先找到标有 ['good first issue'](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) 或 ['help-wanted'](https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) 标签的 Issue。这些都比较简单,可以从这些问题开始。 +* 推荐新贡献者先从简单的问题开始:['good first issue'](https://github.com/Microsoft/nni/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22) 或 ['help-wanted'](https://github.com/microsoft/nni/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22)。 * [NNI 开发环境安装教程](docs/zh_CN/Tutorial/SetupNniDeveloperEnvironment.md) * [如何调试](docs/zh_CN/Tutorial/HowToDebug.md) +* 如果有使用上的问题,可先查看[常见问题解答](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Tutorial/FAQ.md)。如果没能解决问题,可通过 [Gitter](https://gitter.im/Microsoft/nni?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 联系 NNI 开发团队或在 GitHub 上 [报告问题](https://github.com/microsoft/nni/issues/new/choose)。 * [自定义 
Tuner](docs/zh_CN/Tuner/CustomizeTuner.md) * [实现定制的训练平台](docs/zh_CN/TrainingService/HowToImplementTrainingService.md) * [在 NNI 上实现新的 NAS Trainer](https://github.com/microsoft/nni/blob/master/docs/zh_CN/NAS/NasInterface.md#implement-a-new-nas-trainer-on-nni) @@ -349,7 +326,7 @@ You can use these commands to get more information about the experiment * [使用 NNI 为 SPTAG 自动调参](docs/zh_CN/CommunitySharings/SptagAutoTune.md) * [使用 NNI 为 scikit-learn 查找超参](https://towardsdatascience.com/find-thy-hyper-parameters-for-scikit-learn-pipelines-using-microsoft-nni-f1015b1224c1) * **博客** - [AutoML 工具(Advisor,NNI 与 Google Vizier)的对比](http://gaocegege.com/Blog/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/katib-new#%E6%80%BB%E7%BB%93%E4%B8%8E%E5%88%86%E6%9E%90) 作者:[@gaocegege](https://github.com/gaocegege) - kubeflow/katib 的设计与实现的总结与分析章节 - * **Blog (中文)** - [NNI 2019 新功能汇总](https://mp.weixin.qq.com/s/7_KRT-rRojQbNuJzkjFMuA) by @squirrelsc + * **博客** - [NNI 2019 新功能汇总](https://mp.weixin.qq.com/s/7_KRT-rRojQbNuJzkjFMuA) by @squirrelsc ## **反馈** diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 3f4238e413..45dc10a976 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -26,8 +26,8 @@ jobs: yarn eslint displayName: 'Run eslint' - script: | - python3 -m pip install torch==0.4.1 --user - python3 -m pip install torchvision==0.2.1 --user + python3 -m pip install torch==1.2.0 --user + python3 -m pip install torchvision==0.4.0 --user python3 -m pip install tensorflow==1.13.1 --user python3 -m pip install keras==2.1.6 --user python3 -m pip install gym onnx --user @@ -91,8 +91,8 @@ jobs: echo "##vso[task.setvariable variable=PATH]${HOME}/Library/Python/3.7/bin:${PATH}" displayName: 'Install nni toolkit via source code' - script: | - python3 -m pip install torch==0.4.1 --user - python3 -m pip install torchvision==0.2.1 --user + python3 -m pip install torch==1.2.0 --user + python3 -m pip install torchvision==0.4.0 --user python3 -m pip install tensorflow==1.13.1 --user ruby -e "$(curl 
-fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
       brew install swig@3
@@ -131,7 +131,7 @@ jobs:
   - script: |
       python -m pip install scikit-learn==0.20.0 --user
       python -m pip install keras==2.1.6 --user
-      python -m pip install https://download.pytorch.org/whl/cu90/torch-0.4.1-cp36-cp36m-win_amd64.whl --user
+      python -m pip install torch===1.2.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html --user
       python -m pip install torchvision --user
       python -m pip install tensorflow==1.13.1 --user
     displayName: 'Install dependencies'
diff --git a/docs/en_US/CommunitySharings/NNI_AutoFeatureEng.md b/docs/en_US/CommunitySharings/NNI_AutoFeatureEng.md
new file mode 100644
index 0000000000..40a1e2f8c1
--- /dev/null
+++ b/docs/en_US/CommunitySharings/NNI_AutoFeatureEng.md
@@ -0,0 +1,99 @@
+# NNI review article from Zhihu: - By Garvin Li
+
+This article is from an NNI user on the Zhihu forum. In it, Garvin shared his experience of using NNI for automatic feature engineering. We think this article is very useful for users who are interested in using NNI for feature engineering. With the author's permission, we translated the original article into English.
+
+**原文(source)**: [如何看待微软最新发布的AutoML平台NNI?By Garvin Li](https://www.zhihu.com/question/297982959/answer/964961829?utm_source=wechat_session&utm_medium=social&utm_oi=28812108627968&from=singlemessage&isappinstalled=0)
+
+## 01 Overview of AutoML
+
+In the author's opinion, AutoML is not only about hyperparameter optimization, but
+also a process that can target various stages of the machine learning pipeline,
+including feature engineering, NAS, HPO, etc.
+
+## 02 Overview of NNI
+
+NNI (Neural Network Intelligence) is an open source AutoML toolkit from
+Microsoft, to help users design and tune machine learning models, neural network
+architectures, or a complex system’s parameters in an efficient and automatic
+way. 
+
+Link: [https://github.com/Microsoft/nni](https://github.com/Microsoft/nni)
+
+In general, most Microsoft tools have one prominent characteristic: the
+design is highly reasonable (regardless of the degree of technical innovation).
+NNI's AutoFeatureENG meets all the basic user requirements with a very
+reasonable underlying framework design.
+
+## 03 Details of NNI-AutoFeatureENG
+
+>The article follows the GitHub project: [https://github.com/SpongebBob/tabular_automl_NNI](https://github.com/SpongebBob/tabular_automl_NNI).
+
+New users can do AutoFeatureENG with NNI easily and efficiently. To explore the AutoFeatureENG capability, download the required files, and then install NNI through pip.
+
+![](https://pic3.zhimg.com/v2-8886eea730cad25f5ac06ef1897cd7e4_r.jpg)
+NNI treats AutoFeatureENG as a two-step task: feature generation exploration and feature selection. Feature generation exploration is mainly about feature derivation and high-order feature combination.
+
+## 04 Feature Exploration
+
+For feature derivation, NNI offers many operations which can automatically generate new features, listed [as follows](https://github.com/SpongebBob/tabular_automl_NNI/blob/master/AutoFEOp.md):
+
+**count**: Count encoding is based on replacing categories with their counts computed on the train set, also named frequency encoding.
+
+**target**: Target encoding is based on encoding categorical variable values with the mean of the target variable per value.
+
+**embedding**: Regard features as sentences, and generate vectors using *Word2Vec*.
+
+**crosscout**: Count encoding on more than one dimension, similar to CTR (Click Through Rate).
+
+**aggregete**: Decide the aggregation functions of the features, including min/max/mean/var.
+
+**nunique**: Statistics of the number of unique features.
+
+**histsta**: Statistics of feature buckets, like histogram statistics. 
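The first two operations above, count (frequency) encoding and target encoding, can be illustrated with a minimal pure-Python sketch. This is only an illustration of the idea, not NNI's or tabular_automl_NNI's implementation, and the toy column and target values are made up:

```python
from collections import Counter, defaultdict

def count_encode(values):
    """Replace each category with its frequency computed on the column."""
    counts = Counter(values)
    return [counts[v] for v in values]

def target_encode(values, targets):
    """Replace each category with the mean of the target variable for that category."""
    sums = defaultdict(float)
    cnts = defaultdict(int)
    for v, t in zip(values, targets):
        sums[v] += t
        cnts[v] += 1
    means = {v: sums[v] / cnts[v] for v in sums}
    return [means[v] for v in values]

if __name__ == "__main__":
    col = ["a", "b", "a", "c", "a"]
    y = [1, 0, 0, 1, 1]
    print(count_encode(col))       # [3, 1, 3, 1, 3]
    print(target_encode(col, y))   # a -> 0.667, b -> 0.0, c -> 1.0
```

On real tabular data these transforms would be fitted on the training split only, to avoid target leakage.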
+
+The search space can be defined in a **JSON file**: it specifies how specific features intersect, which two columns intersect, and how new features are generated from the corresponding columns.
+
+![](https://pic1.zhimg.com/v2-3c3eeec6eea9821e067412725e5d2317_r.jpg)
+
+The picture shows the procedure of defining a search space. NNI provides count encoding as a 1-order-op, as well as cross count encoding and aggregate statistics (min, max, var, mean, median, nunique) as 2-order-ops.
+
+For example, we can search for frequency encoding (value count) features on the columns {"C1", ..., "C26"} in the following way:
+
+![](https://github.com/JSong-Jia/Pic/blob/master/images/pic%203.jpg)
+
+We can define a cross frequency encoding (value count on crossed dimensions) method on the columns {"C1",...,"C26"} x {"C1",...,"C26"} in the following way:
+
+![](https://github.com/JSong-Jia/Pic/blob/master/images/pic%204.jpg)
+
+The purpose of Exploration is to generate new features. You can use the **get_next_parameter** function to get the feature candidates received by one trial:
+
+>RECEIVED_PARAMS = nni.get_next_parameter()
+
+## 05 Feature selection
+
+To avoid feature explosion and overfitting, feature selection is necessary. For feature selection, NNI-AutoFeatureENG mainly promotes LightGBM (Light Gradient Boosting Machine), a gradient boosting framework developed by Microsoft.
+
+![](https://pic2.zhimg.com/v2-7bf9c6ae1303692101a911def478a172_r.jpg)
+
+If you have used **XGBoost** or **GBDT**, you would know that tree-based algorithms can easily calculate the importance of each feature to the results, so LightGBM lends itself naturally to feature selection.
+
+The issue is that the selected features might be applicable to *GBDT* (Gradient Boosting Decision Tree), but not to linear algorithms like *LR* (Logistic Regression). 
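To make the JSON search-space idea above concrete, here is a hypothetical sketch. The key names and structure are assumptions loosely modeled on the tabular_automl_NNI project, not its exact schema, and the numeric columns "I9"/"I10" are made up; check the project's own search space file for the real format:

```python
import json

# Hypothetical search-space sketch: which ops to explore, and on which columns.
search_space = {
    "count": ["C1", "C2", "C26"],   # 1-order frequency encoding on single columns
    "crosscount": [                 # 2-order cross count encoding on column pairs
        ["C1", "C2", "C26"],
        ["C1", "C2", "C26"],
    ],
    "aggregate": [                  # aggregate stats (min/max/mean/var) of numeric
        ["I9", "I10"],              # columns grouped by categorical columns
        ["C1", "C2"],
    ],
}

print(json.dumps(search_space, indent=2))
```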
+
+![](https://pic4.zhimg.com/v2-d2f919497b0ed937acad0577f7a8df83_r.jpg)
+
+## 06 Summary
+
+NNI's AutoFeatureEng sets a well-established standard: it shows the operation procedure and the available modules, and it is highly convenient to use. However, a simple model is probably not enough for good results.
+
+## Suggestions to NNI
+
+About Exploration: it would be better to consider using a DNN (such as xDeepFM) to extract high-order features.
+
+About Selection: there could be more intelligent options, such as an automatic selection system based on downstream models.
+
+Conclusion: NNI offers users some design inspiration, and it is a good open source project. I suggest researchers leverage it to accelerate AI research.
+
+Tips: Because the scripts of open source projects are compiled with gcc7, macOS may encounter gcc (GNU Compiler Collection) problems. The solution is as follows:
+
+    brew install libomp
+
diff --git a/docs/en_US/CommunitySharings/community_sharings.rst b/docs/en_US/CommunitySharings/community_sharings.rst
index 6938000a9b..23431301c1 100644
--- a/docs/en_US/CommunitySharings/community_sharings.rst
+++ b/docs/en_US/CommunitySharings/community_sharings.rst
@@ -13,3 +13,4 @@ In addtion to the official tutorilas and examples, we encourage community contri
    Hyper-parameter Tuning Algorithm Comparsion
    Parallelizing Optimization for TPE
    Automatically tune systems with NNI
+   NNI review article from Zhihu: - By Garvin Li
diff --git a/docs/en_US/TrainingService/PaiMode.md b/docs/en_US/TrainingService/PaiMode.md
index 3174e1079b..6f5068d320 100644
--- a/docs/en_US/TrainingService/PaiMode.md
+++ b/docs/en_US/TrainingService/PaiMode.md
@@ -3,7 +3,36 @@
 NNI supports running an experiment on [OpenPAI](https://github.com/Microsoft/pai) (aka pai), called pai mode. Before starting to use NNI pai mode, you should have an account to access an [OpenPAI](https://github.com/Microsoft/pai) cluster. 
See [here](https://github.com/Microsoft/pai#how-to-deploy) if you don't have any OpenPAI account and want to deploy an OpenPAI cluster. In pai mode, your trial program will run in pai's container created by Docker.
 
 ## Setup environment
-Install NNI, follow the install guide [here](../Tutorial/QuickStart.md).
+Step 1. Install NNI following the install guide [here](../Tutorial/QuickStart.md).
+
+Step 2. Get a PAI token.
+Click the `My profile` button in the top-right corner of PAI's webportal.
+![](../../img/pai_token_button.jpg)
+Find the token management region and copy one of the tokens as your account token.
+![](../../img/pai_token_profile.jpg)
+
+Step 3. Mount the NFS storage to your local machine.
+Click the `Submit job` button in PAI's webportal.
+![](../../img/pai_job_submission_page.jpg)
+Find the data management region in the job submission page.
+![](../../img/pai_data_management_page.jpg)
+The `DEFAULT_STORAGE` field is the path to be mounted in PAI's container when a job is started. `Preview container paths` shows the NFS host and path that PAI provides; you need to mount the corresponding host and path to your local machine first, then NNI can use PAI's NFS storage.
+For example, use the following command:
+```bash
+sudo mount nfs://gcr-openpai-infra02:/pai/data /local/mnt
+```
+Then the `/data` folder in the container will be mounted to the `/local/mnt` folder on your local machine.
+You can then use the following configuration in your NNI config file:
+```yaml
+nniManagerNFSMountPath: /local/mnt
+containerNFSMountPath: /data
+```
+
+Step 4. Get PAI's storage plugin name.
+Contact PAI's admin to get the storage plugin name for NFS storage. The default storage name is `teamwise_storage`; the corresponding configuration in NNI's config file is:
+```yaml
+paiStoragePlugin: teamwise_storage
+```
 
 ## Run an experiment
 Use `examples/trials/mnist-annotation` as an example. 
The NNI config YAML file's content is like:
 
@@ -37,6 +66,7 @@ trial:
   virtualCluster: default
   nniManagerNFSMountPath: /home/user/mnt
   containerNFSMountPath: /mnt/data/user
+  paiStoragePlugin: team_wise
 # Configuration to access OpenPAI Cluster
 paiConfig:
   userName: your_pai_nni_user
@@ -48,12 +78,12 @@ Note: You should set `trainingServicePlatform: pai` in NNI config YAML file if y
 Compared with [LocalMode](LocalMode.md) and [RemoteMachineMode](RemoteMachineMode.md), trial configuration in pai mode have these additional keys:
 * cpuNum
-    * Required key. Should be positive number based on your trial program's CPU requirement
+    * Optional key. Should be a positive number based on your trial program's CPU requirement. If it is not set in the trial configuration, it should be set in the config file specified in the `paiConfigPath` field.
 * memoryMB
-    * Required key. Should be positive number based on your trial program's memory requirement
+    * Optional key. Should be a positive number based on your trial program's memory requirement. If it is not set in the trial configuration, it should be set in the config file specified in the `paiConfigPath` field.
 * image
-    * Required key. In pai mode, your trial program will be scheduled by OpenPAI to run in [Docker container](https://www.docker.com/). This key is used to specify the Docker image used to create the container in which your trial will run.
-    * We already build a docker image [nnimsra/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains NNI python packages, Node modules and javascript artifact files required to start experiment, and all of NNI dependencies. The docker file used to build this image can be found at [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it.
+    * Optional key. 
In pai mode, your trial program will be scheduled by OpenPAI to run in a [Docker container](https://www.docker.com/). This key is used to specify the Docker image used to create the container in which your trial will run.
+    * We have already built a docker image [msranni/nni](https://hub.docker.com/r/msranni/nni/) on [Docker Hub](https://hub.docker.com/). It contains NNI python packages, Node modules and javascript artifact files required to start an experiment, and all of NNI's dependencies. The docker file used to build this image can be found [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile). You can either use this image directly in your config file, or build your own image based on it. If it is not set in the trial configuration, it should be set in the config file specified in the `paiConfigPath` field.
 * virtualCluster
     * Optional key. Set the virtualCluster of OpenPAI. If omitted, the job will run on default virtual cluster.
 * nniManagerNFSMountPath
@@ -61,7 +91,9 @@ Compared with [LocalMode](LocalMode.md) and [RemoteMachineMod
 * containerNFSMountPath
     * Required key. Set the mount path in your container used in PAI.
 * paiStoragePlugin
-    * Required key. Set the storage plugin name used in PAI.
+    * Optional key. Set the storage plugin name used in PAI. If it is not set in the trial configuration, it should be set in the config file specified in the `paiConfigPath` field.
+* paiConfigPath
+    * Optional key. Set the file path of the pai job configuration; the file is in YAML format.
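Taken together, the optional keys above mean a pai-mode trial section can stay quite small, as in the sketch below. The values are illustrative placeholders based on the example earlier on this page, and `pai_job_config.yml` is a made-up file name; the keys left out must then appear in the OpenPAI job config referenced by `paiConfigPath`:

```yaml
trial:
  command: python3 mnist.py
  codeDir: ~/nni/examples/trials/mnist-annotation
  gpuNum: 0
  nniManagerNFSMountPath: /home/user/mnt
  containerNFSMountPath: /mnt/data/user
  paiStoragePlugin: teamwise_storage
  # cpuNum, memoryMB and image are omitted here, so they must be set
  # in the OpenPAI job config file referenced below:
  paiConfigPath: ~/nni/pai_job_config.yml
```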
Once complete to fill NNI experiment config file and save (for example, save as exp_pai.yml), then run the following command diff --git a/docs/en_US/TrainingService/PaiYarnMode.md b/docs/en_US/TrainingService/PaiYarnMode.md index eb2864f94c..64dda5e465 100644 --- a/docs/en_US/TrainingService/PaiYarnMode.md +++ b/docs/en_US/TrainingService/PaiYarnMode.md @@ -6,7 +6,7 @@ The original `pai` mode is modificated to `paiYarn` mode, which is a distributed Install NNI, follow the install guide [here](../Tutorial/QuickStart.md). ## Run an experiment -Use `examples/trials/mnist-annotation` as an example. The NNI config YAML file's content is like: +Use `examples/trials/mnist-tfv1` as an example. The NNI config YAML file's content is like: ```yaml authorName: your_name @@ -22,14 +22,14 @@ trainingServicePlatform: paiYarn # search space file searchSpacePath: search_space.json # choice: true, false -useAnnotation: true +useAnnotation: false tuner: builtinTunerName: TPE classArgs: optimize_mode: maximize trial: command: python3 mnist.py - codeDir: ~/nni/examples/trials/mnist-annotation + codeDir: ~/nni/examples/trials/mnist-tfv1 gpuNum: 0 cpuNum: 1 memoryMB: 8196 diff --git a/docs/en_US/TrainingService/RemoteMachineMode.md b/docs/en_US/TrainingService/RemoteMachineMode.md index 7e1df06ccc..fb3aeaca7f 100644 --- a/docs/en_US/TrainingService/RemoteMachineMode.md +++ b/docs/en_US/TrainingService/RemoteMachineMode.md @@ -1,24 +1,32 @@ -# Run an Experiment on Multiple Machines +# Run an Experiment on Remote Machines -NNI supports running an experiment on multiple machines through SSH channel, called `remote` mode. NNI assumes that you have access to those machines, and already setup the environment for running deep learning training code. +NNI can run one experiment on multiple remote machines through SSH, called `remote` mode. It's like a lightweight training platform. In this mode, NNI can be started from your computer, and dispatch trials to remote machines in parallel. -e.g. 
Three machines and you login in with account `bob` (Note: the account is not necessarily the same on different machine):
+## Remote machine requirements
 
-| IP       | Username| Password |
-| -------- |---------|-------|
-| 10.1.1.1 | bob | bob123 |
-| 10.1.1.2 | bob | bob123 |
-| 10.1.1.3 | bob | bob123 |
+* Only Linux is supported for remote machines, and the [Linux part of the system specification](../Tutorial/InstallationLinux.md) is the same as for NNI local mode.
 
-## Setup NNI environment
+* Follow [installation](../Tutorial/InstallationLinux.md) to install NNI on each machine.
 
-Install NNI on each of your machines following the install guide [here](../Tutorial/QuickStart.md).
+* Make sure remote machines meet the environment requirements of your trial code. If the default environment does not meet the requirements, a setup script can be added to the `command` field of the NNI config.
+
+* Make sure remote machines can be accessed through SSH from the machine which runs the `nnictl` command. Both password and key authentication of SSH are supported. For advanced usage, please refer to the [machineList part of the configuration](../Tutorial/ExperimentConfig.md).
+
+* Make sure the NNI version on each machine is consistent.
 
 ## Run an experiment
 
-Install NNI on another machine which has network accessibility to those three machines above, or you can just run `nnictl` on any one of the three to launch the experiment.
+For example, there are three machines that can be logged in with username and password.
+
+| IP       | Username | Password |
+| -------- | -------- | -------- |
+| 10.1.1.1 | bob      | bob123   |
+| 10.1.1.2 | bob      | bob123   |
+| 10.1.1.3 | bob      | bob123   |
+
+Install and run NNI on one of those three machines or on another machine that has network access to them.
 
-We use `examples/trials/mnist-annotation` as an example here. Shown here is `examples/trials/mnist-annotation/config_remote.yml`:
+Use `examples/trials/mnist-annotation` as the example. 
Below is the content of `examples/trials/mnist-annotation/config_remote.yml`:
 
 ```yaml
 authorName: default
@@ -58,14 +66,8 @@ machineList:
     passwd: bob123
 ```
 
-Files in `codeDir` will be automatically uploaded to the remote machine. You can run NNI on different operating systems (Windows, Linux, MacOS) to spawn experiments on the remote machines (only Linux allowed):
+Files in `codeDir` will be uploaded to remote machines automatically. You can run the command below on Windows, Linux, or macOS to spawn trials on remote Linux machines:
 
 ```bash
 nnictl create --config examples/trials/mnist-annotation/config_remote.yml
 ```
-
-You can also use public/private key pairs instead of username/password for authentication. For advanced usages, please refer to [Experiment Config Reference](../Tutorial/ExperimentConfig.md).
-
-## Version check
-
-NNI support version check feature in since version 0.6, [reference](PaiMode.md).
\ No newline at end of file
diff --git a/docs/en_US/TrainingService/SupportTrainingService.md b/docs/en_US/TrainingService/SupportTrainingService.md
index 56c4253aa4..ca2b9283fc 100644
--- a/docs/en_US/TrainingService/SupportTrainingService.md
+++ b/docs/en_US/TrainingService/SupportTrainingService.md
@@ -8,7 +8,7 @@ NNI not only provides few built-in training service options, but also provides a
 |TrainingService|Brief Introduction|
 |---|---|
 |[__Local__](./LocalMode.md)|NNI supports running an experiment on local machine, called local mode. Local mode means that NNI will run the trial jobs and nniManager process in same machine, and support gpu schedule function for trial jobs.|
-|[__Remote__](./RemoteMachineMode.md)|NNI supports running an experiment on multiple machines through SSH channel, called remote mode. NNI assumes that you have access to those machines, and already setup the environment for running deep learning training code. 
NNI will submit the trial jobs in remote machine, and schedule suitable machine with enouth gpu resource if specified.| +|[__Remote__](./RemoteMachineMode.md)|NNI supports running an experiment on multiple machines through SSH channel, called remote mode. NNI assumes that you have access to those machines, and already setup the environment for running deep learning training code. NNI will submit the trial jobs in remote machine, and schedule suitable machine with enough gpu resource if specified.| |[__Pai__](./PaiMode.md)|NNI supports running an experiment on [OpenPAI](https://github.com/Microsoft/pai) (aka pai), called pai mode. Before starting to use NNI pai mode, you should have an account to access an [OpenPAI](https://github.com/Microsoft/pai) cluster. See [here](https://github.com/Microsoft/pai#how-to-deploy) if you don't have any OpenPAI account and want to deploy an OpenPAI cluster. In pai mode, your trial program will run in pai's container created by Docker.| |[__Kubeflow__](./KubeflowMode.md)|NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/kubeflow), called kubeflow mode. Before starting to use NNI kubeflow mode, you should have a Kubernetes cluster, either on-premises or [Azure Kubernetes Service(AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), a Ubuntu machine on which [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is setup to connect to your Kubernetes cluster. If you are not familiar with Kubernetes, [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/) is a good start. In kubeflow mode, your trial program will run as Kubeflow job in Kubernetes cluster.| |[__FrameworkController__](./FrameworkControllerMode.md)|NNI supports running experiment using [FrameworkController](https://github.com/Microsoft/frameworkcontroller), called frameworkcontroller mode. 
FrameworkController is built to orchestrate all kinds of applications on Kubernetes, you don't need to install Kubeflow for specific deep learning framework like tf-operator or pytorch-operator. Now you can use FrameworkController as the training service to run NNI experiment.| @@ -17,7 +17,8 @@ NNI not only provides few built-in training service options, but also provides a TrainingService is designed to be easily implemented, we define an abstract class TrainingService as the parent class of all kinds of TrainingService, users just need to inherit the parent class and complete their own child class if they want to implement customized TrainingService. The abstract function in TrainingService is shown below: -``` + +```javascript abstract class TrainingService { public abstract listTrialJobs(): Promise; public abstract getTrialJob(trialJobId: string): Promise; @@ -33,5 +34,6 @@ abstract class TrainingService { public abstract run(): Promise; } ``` + The parent class of TrainingService has a few abstract functions, users need to inherit the parent class and implement all of these abstract functions. For more information about how to write your own TrainingService, please [refer](https://github.com/microsoft/nni/blob/master/docs/en_US/TrainingService/HowToImplementTrainingService.md). diff --git a/docs/en_US/Tuner/HyperbandAdvisor.md b/docs/en_US/Tuner/HyperbandAdvisor.md index a367b06b13..b7787af199 100644 --- a/docs/en_US/Tuner/HyperbandAdvisor.md +++ b/docs/en_US/Tuner/HyperbandAdvisor.md @@ -5,7 +5,7 @@ Hyperband on NNI [Hyperband][1] is a popular automl algorithm. The basic idea of Hyperband is that it creates several buckets, each bucket has `n` randomly generated hyperparameter configurations, each configuration uses `r` resource (e.g., epoch number, batch number). After the `n` configurations is finished, it chooses top `n/eta` configurations and runs them using increased `r*eta` resource. At last, it chooses the best configuration it has found so far. ## 2. 
Implementation with fully parallelism
-Frist, this is an example of how to write an automl algorithm based on MsgDispatcherBase, rather than Tuner and Assessor. Hyperband is implemented in this way because it integrates the functions of both Tuner and Assessor, thus, we call it advisor.
+First, this is an example of how to write an automl algorithm based on MsgDispatcherBase, rather than Tuner and Assessor. Hyperband is implemented in this way because it integrates the functions of both Tuner and Assessor, thus, we call it advisor.
 
 Second, this implementation fully leverages Hyperband's internal parallelism. More specifically, the next bucket is not started strictly after the current bucket, instead, it starts when there is available resource.
 
diff --git a/docs/en_US/Tutorial/InstallationLinux.md b/docs/en_US/Tutorial/InstallationLinux.md
index f5a562fe8c..1bce0137c1 100644
--- a/docs/en_US/Tutorial/InstallationLinux.md
+++ b/docs/en_US/Tutorial/InstallationLinux.md
@@ -1,58 +1,110 @@
-# Installation on Linux & Mac
+# Install on Linux & Mac
 
 ## Installation
 
-Installation on Linux and Mac follow the same instruction below.
+Installation on Linux and macOS follows the same instructions below.
 
-### __Install NNI through pip__
+### Install NNI through pip
 
-  Prerequisite: `python >= 3.5`
+  Prerequisite: `python 64-bit >= 3.5`
 
   ```bash
   python3 -m pip install --upgrade nni
   ```
 
-### __Install NNI through source code__
+### Install NNI through source code
 
-  Prerequisite: `python >=3.5`, `git`, `wget`
+  If you are interested in a special or the latest code version, you can install NNI through source code.
+
+  Prerequisites: `python 64-bit >=3.5`, `git`, `wget`
 
   ```bash
-    git clone -b v0.8 https://github.com/Microsoft/nni.git
+    git clone -b v1.3 https://github.com/Microsoft/nni.git
     cd nni
     ./install.sh
   ```
 
-### __Install NNI in docker image__
+### Use NNI in a docker image
 
 You can also install NNI in a docker image. 
Please follow the instructions [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/README.md) to build NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`.
 
+## Verify installation
-## System requirements
+The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it.
+
+* Download the examples by cloning the source code.
+
+  ```bash
+  git clone -b v1.3 https://github.com/Microsoft/nni.git
+  ```
-Below are the minimum system requirements for NNI on Linux. Due to potential programming changes, the minimum system requirements for NNI may change over time.
-
-||Minimum Requirements|Recommended Specifications|
-|---|---|---|
-|**Operating System**|Ubuntu 16.04 or above|Ubuntu 16.04 or above|
-|**CPU**|Intel® Core™ i3 or AMD Phenom™ X3 8650|Intel® Core™ i5 or AMD Phenom™ II X3 or better|
-|**GPU**|NVIDIA® GeForce® GTX 460|NVIDIA® GeForce® GTX 660 or better|
-|**Memory**|4 GB RAM|6 GB RAM|
-|**Storage**|30 GB available hare drive space|
-|**Internet**|Boardband internet connection|
-|**Resolution**|1024 x 768 minimum display resolution|
-
-Below are the minimum system requirements for NNI on macOS. Due to potential programming changes, the minimum system requirements for NNI may change over time.
-
-||Minimum Requirements|Recommended Specifications|
-|---|---|---|
-|**Operating System**|macOS 10.14.1 (latest version)|macOS 10.14.1 (latest version)|
-|**CPU**|Intel® Core™ i5-760 or better|Intel® Core™ i7-4770 or better|
-|**GPU**|NVIDIA® GeForce® GT 750M or AMD Radeon™ R9 M290 or better|AMD Radeon™ R9 M395X or better|
-|**Memory**|4 GB RAM|8 GB RAM|
-|**Storage**|70GB available space 7200 RPM HDD|70GB available space SSD|
-|**Internet**|Boardband internet connection|
-|**Resolution**|1024 x 768 minimum display resolution|
+* Run the MNIST example. 
+ + ```bash + nnictl create --config nni/examples/trials/mnist-tfv1/config.yml + ``` + +* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`. + +```text +INFO: Starting restful server... +INFO: Successfully started Restful server! +INFO: Setting local config... +INFO: Successfully set local config! +INFO: Starting experiment... +INFO: Successfully started experiment! +----------------------------------------------------------------------- +The experiment id is egchD4qy +The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080 +----------------------------------------------------------------------- + +You can use these commands to get more information about the experiment +----------------------------------------------------------------------- + commands description +1. nnictl experiment show show the information of experiments +2. nnictl trial ls list all of trial jobs +3. nnictl top monitor the status of running experiments +4. nnictl log stderr show stderr log content +5. nnictl log stdout show stdout log content +6. nnictl stop stop an experiment +7. nnictl trial kill kill a trial job by id +8. nnictl --help get help information about nnictl +----------------------------------------------------------------------- +``` + +* Open the `Web UI url` in your browser, you can view detail information of the experiment and all the submitted trial jobs as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages. + +![overview](../../img/webui_overview_page.png) + +![detail](../../img/webui_trialdetail_page.png) + +## System requirements +Due to potential programming changes, the minimum system requirements of NNI may change over time. 
+
+### Linux
+
+|                      | Recommended                                    | Minimum                                |
+| -------------------- | ---------------------------------------------- | -------------------------------------- |
+| **Operating System** | Ubuntu 16.04 or above                          |
+| **CPU**              | Intel® Core™ i5 or AMD Phenom™ II X3 or better | Intel® Core™ i3 or AMD Phenom™ X3 8650 |
+| **GPU**              | NVIDIA® GeForce® GTX 660 or better             | NVIDIA® GeForce® GTX 460               |
+| **Memory**           | 6 GB RAM                                       | 4 GB RAM                               |
+| **Storage**          | 30 GB available hard drive space               |
+| **Internet**         | Broadband internet connection                  |
+| **Resolution**       | 1024 x 768 minimum display resolution          |
+
+### macOS
+
+|                      | Recommended                           | Minimum                                                   |
+| -------------------- | ------------------------------------- | --------------------------------------------------------- |
+| **Operating System** | macOS 10.14.1 or above                |
+| **CPU**              | Intel® Core™ i7-4770 or better        | Intel® Core™ i5-760 or better                             |
+| **GPU**              | AMD Radeon™ R9 M395X or better        | NVIDIA® GeForce® GT 750M or AMD Radeon™ R9 M290 or better |
+| **Memory**           | 8 GB RAM                              | 4 GB RAM                                                  |
+| **Storage**          | 70GB available space SSD              | 70GB available space 7200 RPM HDD                         |
+| **Internet**         | Broadband internet connection         |
+| **Resolution**       | 1024 x 768 minimum display resolution |
 
 ## Further reading
diff --git a/docs/en_US/Tutorial/InstallationWin.md b/docs/en_US/Tutorial/InstallationWin.md
index 2531f5b3ad..36f87eed8d 100644
--- a/docs/en_US/Tutorial/InstallationWin.md
+++ b/docs/en_US/Tutorial/InstallationWin.md
@@ -1,51 +1,94 @@
-# Installation on Windows
+# Install on Windows
 
 ## Installation
 
-Anaconda or Miniconda is highly recommended.
+Anaconda or Miniconda is highly recommended to manage multiple Python environments.
 
-### __Install NNI through pip__
+### Install NNI through pip
 
-  Prerequisite: `python(64-bit) >= 3.5`
+  Prerequisites: `python 64-bit >= 3.5`
 
   ```bash
   python -m pip install --upgrade nni
   ```
 
-### __Install NNI through source code__
+### Install NNI through source code
 
-  Prerequisite: `python >=3.5`, `git`, `PowerShell`. 
+  If you are interested in a special or the latest code version, you can install NNI from the source code.
+
+  Prerequisites: `python 64-bit >=3.5`, `git`, `PowerShell`.
 
   ```bash
-  git clone -b v0.8 https://github.com/Microsoft/nni.git
+  git clone -b v1.3 https://github.com/Microsoft/nni.git
   cd nni
   powershell -ExecutionPolicy Bypass -file install.ps1
   ```
 
-## System requirements
+## Verify installation
 
-Below are the minimum system requirements for NNI on Windows, Windows 10.1809 is well tested and recommend. Due to potential programming changes, the minimum system requirements for NNI may change over time.
+The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it.
 
-||Minimum Requirements|Recommended Specifications|
-|---|---|---|
-|**Operating System**|Windows 10|Windows 10|
-|**CPU**|Intel® Core™ i3 or AMD Phenom™ X3 8650|Intel® Core™ i5 or AMD Phenom™ II X3 or better|
-|**GPU**|NVIDIA® GeForce® GTX 460|NVIDIA® GeForce® GTX 660 or better|
-|**Memory**|4 GB RAM|6 GB RAM|
-|**Storage**|30 GB available hare drive space|
-|**Internet**|Boardband internet connection|
-|**Resolution**|1024 x 768 minimum display resolution|
+* Download the examples by cloning the source code.
 
+  ```bash
+  git clone -b v1.3 https://github.com/Microsoft/nni.git
+  ```
 
-## Run NNI examples on Windows
+* Run the MNIST example.
 
-When installation is done, use the **config_windows.yml** configuration to start an experiment for validation.
+  ```bash
+  nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
+  ```
 
-```bash
-nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
+  Note: for other examples, you need to change the trial command `python3` to `python` in each example's YAML file if Python 3 is invoked as `python` on your machine.
+
+* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started.
You can explore the experiment using the `Web UI url`.
+
+```text
+INFO: Starting restful server...
+INFO: Successfully started Restful server!
+INFO: Setting local config...
+INFO: Successfully set local config!
+INFO: Starting experiment...
+INFO: Successfully started experiment!
+-----------------------------------------------------------------------
+The experiment id is egchD4qy
+The Web UI urls are: http://223.255.255.1:8080   http://127.0.0.1:8080
+-----------------------------------------------------------------------
+
+You can use these commands to get more information about the experiment
+-----------------------------------------------------------------------
+         commands                       description
+1. nnictl experiment show        show the information of experiments
+2. nnictl trial ls               list all of trial jobs
+3. nnictl top                    monitor the status of running experiments
+4. nnictl log stderr             show stderr log content
+5. nnictl log stdout             show stdout log content
+6. nnictl stop                   stop an experiment
+7. nnictl trial kill             kill a trial job by id
+8. nnictl --help                 get help information about nnictl
+-----------------------------------------------------------------------
 ```
 
-For other examples you need to change trial command `python3` into `python` in each example YAML.
+* Open the `Web UI url` in your browser to view detailed information about the experiment and all the submitted trial jobs, as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages.
+
+![overview](../../img/webui_overview_page.png)
+
+![detail](../../img/webui_trialdetail_page.png)
+
+## System requirements
+
+Below are the minimum system requirements for NNI on Windows. Windows 10 (1809) is well tested and recommended. Due to potential programming changes, the minimum system requirements for NNI may change over time.
+
+| | Recommended | Minimum |
+| -------------------- | ---------------------------------------------- | -------------------------------------- |
+| **Operating System** | Windows 10 1809 or above | Windows 10 1809 or above |
+| **CPU** | Intel® Core™ i5 or AMD Phenom™ II X3 or better | Intel® Core™ i3 or AMD Phenom™ X3 8650 |
+| **GPU** | NVIDIA® GeForce® GTX 660 or better | NVIDIA® GeForce® GTX 460 |
+| **Memory** | 6 GB RAM | 4 GB RAM |
+| **Storage** | 30 GB available hard drive space | 30 GB available hard drive space |
+| **Internet** | Broadband internet connection | Broadband internet connection |
+| **Resolution** | 1024 x 768 minimum display resolution | 1024 x 768 minimum display resolution |
 
 ## FAQ
 
diff --git a/docs/en_US/Tutorial/Nnictl.md b/docs/en_US/Tutorial/Nnictl.md
index b58d4c4a37..b0a8b33513 100644
--- a/docs/en_US/Tutorial/Nnictl.md
+++ b/docs/en_US/Tutorial/Nnictl.md
@@ -49,7 +49,7 @@ nnictl support commands:
   |--config, -c|  True| |YAML configure file of the experiment|
   |--port, -p|False| |the port of restful server|
   |--debug, -d|False||set debug mode|
-  |--watch, -w|False||set watch mode|
+  |--foreground, -f|False||set foreground mode, print log content to terminal|
 
 * Examples
 
@@ -98,7 +98,7 @@ Debug mode will disable version check function in Trialkeeper.
   |id| True| |The id of the experiment you want to resume|
   |--port, -p| False| |Rest port of the experiment you want to resume|
   |--debug, -d|False||set debug mode|
-  |--watch, -w|False||set watch mode|
+  |--foreground, -f|False||set foreground mode, print log content to terminal|
 
 * Example
 
diff --git a/docs/en_US/Tutorial/QuickStart.md b/docs/en_US/Tutorial/QuickStart.md
index c460638358..9990d234d1 100644
--- a/docs/en_US/Tutorial/QuickStart.md
+++ b/docs/en_US/Tutorial/QuickStart.md
@@ -2,14 +2,15 @@
 
 ## Installation
 
-We support Linux MacOS and Windows in current stage, Ubuntu 16.04 or higher, MacOS 10.14.1 and Windows 10.1809 are tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`.
-#### Linux and MacOS
+We currently support Linux, macOS, and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10 (1809) are tested and supported. Simply run the following `pip install` in an environment that has `python >= 3.5`.
+
+**Linux and macOS**
 
 ```bash
 python3 -m pip install --upgrade nni
 ```
 
-#### Windows
+**Windows**
 
 ```bash
 python -m pip install --upgrade nni
@@ -17,7 +18,7 @@
 
 Note:
 
-* For Linux and MacOS `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges.
+* For Linux and macOS `--user` can be added if you want to install NNI in your home directory, which does not require any special privileges.
 * If there is any error like `Segmentation fault`, please refer to [FAQ](FAQ.md)
 * For the `system requirements` of NNI, please refer to [Install NNI on Linux&Mac](InstallationLinux.md) or [Windows](InstallationWin.md)
 
@@ -53,7 +54,7 @@ The above code can only try one set of parameters at a time, if we want to tune
 
 NNI was born to help users with tuning jobs; the NNI working process is presented below:
 
-```
+```text
 input: search space, trial code, config file
 output: one optimal hyperparameter configuration
 
@@ -68,7 +69,7 @@ output: one optimal hyperparameter configuration
 
 If you want to use NNI to automatically train your model and find the optimal hyper-parameters, you need to make three changes to your code:
 
-**Three things required to do when using NNI**
+**Three steps to start an experiment**
 
 **Step 1**: Provide a `Search Space` file in JSON, which includes the `name` and the `distribution` (discrete or continuous valued) of all the hyperparameters you need to search.
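The search-space idea in Step 1 can be sketched in plain Python. This is a minimal sketch, assuming the common NNI search-space shape (`_type` naming a distribution, `_value` giving its parameters); the hyperparameter names below and the tiny sampler are illustrative stand-ins for what a tuner does, not code from this patch.

```python
import json
import math
import random

# A search space in NNI-style JSON form: each hyperparameter has a name,
# a "_type" (the distribution) and "_value" (its parameters).
# The names here (dropout_rate, batch_size, lr) are illustrative.
search_space = {
    "dropout_rate": {"_type": "uniform", "_value": [0.5, 0.9]},
    "batch_size": {"_type": "choice", "_value": [16, 32, 64, 128]},
    "lr": {"_type": "loguniform", "_value": [0.0001, 0.1]},
}

def sample(space):
    """Draw one configuration, mimicking what a tuner does with the space."""
    params = {}
    for name, spec in space.items():
        t, v = spec["_type"], spec["_value"]
        if t == "choice":
            params[name] = random.choice(v)
        elif t == "uniform":
            params[name] = random.uniform(v[0], v[1])
        elif t == "loguniform":
            lo, hi = math.log(v[0]), math.log(v[1])
            params[name] = math.exp(random.uniform(lo, hi))
        else:
            raise ValueError(f"unsupported _type: {t}")
    return params

print(json.dumps(sample(search_space)))
```

Serializing `search_space` with `json.dumps` yields the kind of JSON file Step 1 asks for; the sampler exists only to show how each `_type` would be interpreted.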
@@ -138,22 +139,25 @@ Note, **for Windows, you need to change trial command `python3` to `python`**
 
 All the codes above are already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1).
 
-#### Linux and MacOS
+**Linux and macOS**
+
 Run the **config.yml** file from your command line to start MNIST experiment.
 
 ```bash
 nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
 ```
 
-#### Windows
+
+**Windows**
+
 Run the **config_windows.yml** file from your command line to start MNIST experiment.
 
-**Note**, if you're using NNI on Windows, it needs to change `python3` to `python` in the config.yml file, or use the config_windows.yml file to start the experiment.
+Note: if you're using NNI on Windows, you need to change `python3` to `python` in the config.yml file, or use the config_windows.yml file to start the experiment.
 
 ```bash
 nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
 ```
 
-Note, **nnictl** is a command line tool, which can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click [here](Nnictl.md) for more usage of `nnictl`
+Note: `nnictl` is a command line tool that can be used to control experiments, such as starting/stopping/resuming an experiment and starting/stopping NNIBoard. Click [here](Nnictl.md) for more usage of `nnictl`.
 
 Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. And this is what we expect to get:
 
@@ -195,7 +199,7 @@ The Web UI urls are: [Your IP]:8080
 
 Open the `Web UI url` (in this case: `[Your IP]:8080`) in your browser to view detailed information about the experiment and all the submitted trial jobs, as shown below. If you cannot open the WebUI link in your terminal, refer to [FAQ](FAQ.md).
 
-#### View summary page
+### View summary page
 
 Click the tab "Overview".
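Once an experiment is started, each trial it launches follows a simple loop: fetch one configuration, train, report metrics. Below is a stand-alone sketch of that control flow; the functions are local stand-ins for `nni.get_next_parameter`, `nni.report_intermediate_result`, and `nni.report_final_result` (which in a real trial talk to the NNI manager over REST), and the "training" is faked so the loop runs on its own.

```python
# Stand-ins for the nni package API; in a real trial these communicate
# with the experiment's REST server instead of using local state.
def get_next_parameter():
    # One hyperparameter configuration chosen by the tuner (hypothetical values).
    return {"lr": 0.01, "batch_size": 32}

reported = []

def report_intermediate_result(metric):
    reported.append(("intermediate", metric))

def report_final_result(metric):
    reported.append(("final", metric))

# The trial body: get a configuration, "train", report per-epoch and final metrics.
params = get_next_parameter()
accuracy = 0.0
for epoch in range(3):
    # Stand-in for one epoch of training and evaluation.
    accuracy = round(accuracy + params["lr"] * 10, 6)
    report_intermediate_result(accuracy)

report_final_result(accuracy)
print(reported)
```

The intermediate reports are what assessors and the Web UI's metric graphs consume; the final report is what the tuner uses to pick the next configurations.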
@@ -207,7 +211,7 @@ Top 10 trials will be listed in the Overview page, you can browse all the trials
 
 ![](../../img/QuickStart2.png)
 
-#### View trials detail page
+### View trials detail page
 
 Click the tab "Default Metric" to see the point graph of all trials. Hover to see its specific default metric and search space message.
 
diff --git a/docs/en_US/reference.rst b/docs/en_US/reference.rst
index df2306eb70..fdbc292f78 100644
--- a/docs/en_US/reference.rst
+++ b/docs/en_US/reference.rst
@@ -2,6 +2,8 @@ References
 ==================
 
 .. toctree::
+  :maxdepth: 2
+
   nnictl Commands
   Experiment Configuration
   Search Space
diff --git a/docs/img/pai_data_management_page.jpg b/docs/img/pai_data_management_page.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..852c5fd3d5b401497b0079a0d735ff8f318e351c
GIT binary patch
literal 226121

    }r9Wx|+&gCe=cV5}Z{jgajzB!qDK<^Yb|B3a?yo>k#;nw_*_H=8|bKtFqJ&fw$KkL&0;FFDA!dnbdQ2JmLflWDDfrL>m+Rs zW}%~9z$$2``qrcz@Gy1ZYr%Oz*Vgn>p=4n`bG~Cm%I#cOT1`JW>9%WK4xWhlIHa?; zv!rvNAd)fQzMZT&1b3Qc&LCGi){|u7m3(s5L||4q=g{-yLoDmM*YI?Uv~H7Z_vB9_ zKbC`$$-ZtR$aw+9COy1>M`cMqKW%O4PFN`k+rmv^6>9R#A!6CgqT9EY<-pqH)Q| z54pw(eab;0Fyo~NI6w!v;JM0pnmw8``s5^8L2uOa(9}G?R9@k^Z=2Av0Y>k+L6vt& zNr}UT&1|FS{O~4OeB}z0$qjP*WL;Uz=3Ro&z4~2!?RE-@k1A5Wgo!m7b>(P-5MG%_ zX#oSJvqpwVElSMwHEF&ojVe+6Da%Y2R42jl@jz|}GT)%vX|X3czJ6&DB!Bn%;mq5_PbF39 zmNKg`euJa^7dkVUKxz@&=nWu|LYhxa6j4kh9AQnYHQvgra!2=}+~p3|Z{}*g8HLf_ zADr(jw5VQ_hehUpAIP;PvYF8@!SS(zerpd_IHW3jMvt7GRW8W0f>gFWiH|+&9Sa7B zGHrb7Z3a`Gwd}BZsQEsg4ECOF>!R$r@Q zNBg8S2Nnu+IxWTh=O6P8k(`$Nf1heHFh-r3>?;2|sWB&#@z9-$Z{yt~t>VfC?@6<3 z5)~(w+w-{l8!m3r0c%D&_FteOE0M!WNwZ&|v{Bzh^Usg|eE82X`7`57JOC|VTY2^sRcko1qZVHz(GoFP@-4BLZK zkTV^3Gp7+89_mSmqTY;-!Hf*p^`X;jA9sTzvs&<|AD>;Qc3?4}w#RgmfBjyJ613?~ zO(x{FQ9QTB4H$n1bGlj7Cct+HvXkI$B(Gy?#Cpzld$JanHv^drYlBG1xMaVu^?oBs z%~|u4HIo_+8=mlD$=dbs!sCw!8?v(nh06YT^)d1Z#py-HGd*H8c(I_m7gy1|*!P2K zp63e^V+pFV z!=x&WH^MECJFKBeZ7S}FyPAMlw5E|F9`()bc$hlrB1Lg&WjmcQhyfa(Do3!OL(T{j z)#m2ONwvCqtm1k|0%@RF*PC*}R?5Hgwd@4V`c8nBp5e+7|68B@D<#WUNz=r^(2 zZ$E(I#`g#3(skALd8Ernv7LVe^l5EatrJhf1Y88;qA~!Cm>io{6WwJd)pAnTBDpS! 
zhQN{>T}L86%Jk_k5D`;=jx4p)w6D1Y)tj40N15Is-E{5KDd%KjZ|zOhFmxDOgIaEd{ehUB7I}4jr7B4j$+Ho>L>@@%BT+A1w|}q!_d1-G~?Bh_@HoqdSQ_<@G;z zA9PlKGIhj>Nii9%x_t<`^v?1FJstrokG&b)<4hdgf1T~$I}x81MP*NgLRYF#rd@7i z1kSTMn$qfA*Ae|lCD*a9G)uh!1WeGAf~Ag`4D*Af6wGVO_G=Yk^0@P6T$ZEKN@kRN zfF$rOSIZLqX0VN%7PPm_gSQ?^y%y^cCzHR8$FDQr_=QYG@25q|q4~mQ!CH#*7H`I* ziK@#pJ+c#pFAox*S6Ltk%0!L^#@rITk>zTMe}T&PbHJSKp4Tsx>Vd<9^oIcO2U3E08L@S42tOPA)y*G(f4hI5jj8`nMIWMZGyu7OiltBo@&O1++q`k76z%1eqOrrevWC0o>9*ud8qL>t8C z%JY{7ry>f2v_W_w!N+>3E2qVa)bH>wJe|JirpTXVW7+wgOZM&BX6QnG(y9ND4GO-~ zIWRRl?~8WWR~PwldI+0d^iIB`?W01kKeUT^(n>IhDSwLe^HzFgNQ}y#E7^v;V-liT z&GfQcLmK-Ylgsd%MTni*%c@HP~XQJTQHBHJ1hza*`k9}^Of9=^X z5CA{0ExBC}82UVD|E{~imcsQx7Tj4*KpVzAlL~-7;-aYmLvz6W%cK;!UC+J!90;Qom<<o*(o1K5E4=K!ZGV zD7;FrJ^XkPIw*5dB!Y45Lv97gCOJyo3xYok$yWXtT^(b}Jl#J2!*;v&;QO0Afx|no&2`4%vt*yKZ41n4%$Ja}%*N|lmKmKn<4XLK@ zkAr{G#hh|kbP}!K@&6v!#I(-Ky<~eh(%pz1xmrTtf-Ee8Mm&nm}U<016IbD+& zt{gd0WUD)GqMu%rFd4L8t2dfMlTu@K+k)u{qBXX}9{M?2+UOE2u)_d~N zwd&aJ_hC<&z9rE~LRw<{Wrha2ORbqKTJA|F5ASCSwimp+T9R+C@IQ_P8qzDqe%9+t zco|;UCVw6kd*~e}gD=SO3-l5i2be>~_-?)Y(L^GSZFwDDL=;Lre!%mC1WFzbhXyaB zqqRHrbH|ySyVEE}dTn}zZ(e?d2c_mtZ?g*Gv0f?lGQR7tf$Bxdtfiff>qnfjr)p|D z1m10=+uqxp>=QCmUs|n%?7O-zS~WbuFVpwW8|ECaQ#(p6s6?dx_yIN|46$;EA(-|5 zbg;v(;gYw_t^F3e-2^3`q@rq&hr?_()}6MzRG92ie&cdMHmk*)sb=3?jc#6S_FDCZ zCv)Vi57M7I4-dV6){2n)TBp$Z3$(XhuhX-v8DpW=;;`||(DWCGe=l$3^iEt%k9lK) zsB=Y)wSbF9g2%; z93#1RnY2tJ4`@fs&sBF^z$WQ_*4BOXKQ^=nk$?n{wDEvWc3!rt??VCR((|js#>l&D zWPf`R)LN}Byopp7gnX7#BwW_a8+*5YvUW4U1KPNPjwBc{~CqSEengdd8Cma#}ZWSkLDhT#-LrT@Ps={H#{P zLguchL$SJ$*nQOP01#DAIbQY>?+gd26FxLNE4qdO0)Zt9Tnm}GQZ((}+qetti+VYx zcVPKd$+3f#iZD;_UbC0Iwmd(k`dIS=28B+fUe~(6-5ovet7yC~a(8x}JmMQ7zs+y| zaLd1*CI+l=Q?59u14vC@N>rby)@2muA2WTQIS9(pDm@}#RQPY~<8iur7FQs?Y zMX1PDz^k@xxB%`-#i|D$V+u*6*h8mnm02BeD=XZR1|y2lhn4{uimZ>6LN~9i7cM`y zWq0s5e=dVCtxJAs&#$u2p}Gl#CeR@UEG!8`khrkXu-5dJU@9j(+IuTUy<3Wg-Rxvw!?`0bt7h}xkq8Q#qy$sL22xJq7W{aJ@M7@&Zdsf+ zZMZK!DEDn<(IZpCyE{tWz%heo60XMXXepxooEl;=$r_yW7`GFYx6`L9W~FEn6J8#h 
z5`XzlfSlA0yL4Zc)nqmbmWohbf97Z|MQ6oFfjzBs`qGcDj99_}=&YfAEt3dWRCpS@ z4-kg#k%ud>RA9gb)&VR(A@a4o_i_aAeu6U+4$wQ@T~-X0Z1$JP@c zKB#RSs>#6cU4i@*ISN+DU)Mgk8zt5ZUT}TQk7-+5Y=Eb2jper39)VX;Cj;hzMIOt# ze0$qqWf!wjnv{9r0Wn-1pCs~|tKzCPV2&y%N^T$`r>2N%z?}!xZS##3BQo~XLsVVuP7>T0& zyV^PbNKfZ~;<5h$g;txgz&KoaMD%GG8-V8c^Wp!a$3z=Y=l^el zSZL~tJ45ii?~p$Xr~a)-(BBIN{j>Wh`c=sqH`&LzS$;390r7YrRfxesR9`|ENUrE^ z7rGu)_`YJ*?C?4c$oChRf)6C&&|q0%?T#U(MP9!Li@&zkMad z*crF;oeJyScb&I|#3E;M_ZO$Q#<^d3`lD_>Un%|Zc)_?bQj@OloIzg{v&aFq)BF;ELKQYB5Q8IG# za$7Rz!^BmW)T>G#E5ZS^l(Ns;*$=7a41 zD=y(b&gUd8ADrPv0J@sKLB3nYqasJ#BKHi{nMsTx%9hkJ#xdel*^kvxG9S9&fUf_Z zxkq1I>m`#ZW)InbpH6L}Jm#qrZMow}0*z+@AA?6^qE)iqonS}%Tr3bMVMp_#kgIg+ z|1I_WOXd~n83Hk~Dy5AiNK9}iV2CQd9y%NH9Gd}&jl{?*hL52$Vx zW)*4faBJuOsQpnNw1eCdi~meG#kGj@e|rH&H93WQu@m2##h!_}0)T7XBW!m;d_@FW z{hfd@%1ZnKMRfx%q83+S10_uO_afx?wdou5$c}&AieF*A(-t4~d#&{gRA_h6oKeK3 z{Z}wdff^&*ZjfPz9iE%|OgPSuOH-V3y_ijy=A61Z&AR~p8T)Tj&aTl7|e`r9XYImv!RA0QM-d8r55{NrI=u!2wMZxt51D*Ijc5@Q1 zo48d@B;rnG7camGHLaG!DjjKCvlj|<<`4c{Rf4g5L)+JOe!VjP_(qO-^5-!=RJ6q) zSLul5HmE@lAlIq<_Rib%q7bxzX?j!L1Lg?H%E|kWe}M*1V4F<~Y~dy|enKzaPVNbc znNM=d{{q$jKkR*XR22J`ZX+NlQ6xtN6a>jR2L%KK1SB>|k=Qmta%=+Toq6Y+JL%qeYu3!Xb^oYE=<2Fps$Ki{?QehEh4HIeFnd^s zo|O#AGZSCsk=uN%cFD+NfjH)KZfbsB$+QjvT;%nl?D5S;rh1@!mbARJgOfHz*O8e?nxTv4&B*n%LL3X4 z7``2ow*V#T-?}2GQVO>+#XK0kzoZ;Dq6?vrhbr*UPTgMbIa!U23}4D)#|~VJ&o~6? 
zFb^T7H{JJ2TQ)bH>E9Y?T3w3<-9XDzxv{XUyHm&|*%ZuyTb)H_5(?sI>(U0^i8DDT zuir0)9x>25Lhn#a^}k+p9t<)xqo@qw3$1-|kN)jso2rRh_Y&4iL~B0i=H5?`Ce!=Q ztTm>5oH>f@`k3zqJhs+)2jnKS{->cH12HX!Im5m}Kl}`o!w$E()oUxPz9l@@hqX2C zITrM}z3k*K=@v`Nyos8u{|Rbqq(ST*rOk!=$_NZQZYTA{vWkcN!0j$`U7czG8J@lw zoa~v`cQ>4m(0`j~8Bj=7n_T@9bVK-qfts4O3I#-ExMyOHIh4YNBHjO47TNiy_BCwZZ8Fe7Zq+Mzah)Cqc_oq=g+m8 zTG!!as}+hdlB%x0PF_8lf<7N)y_lC{;2++G#$ms!Jg(M0zJ z?_`lg^@SAVYB)JNxDn}6^!iaQ9c($9%6QFw{dD>hWF9y>dxjBWbT&~D_i+)ARZHD_ z#M(gax)LD0+9o%#k5L`@gce{KZy!uiN?o{GJ=4E4(E)dJUZW?C`w5y%r01z{XaJO`B-0r-sxKiwRZ${A{qXQa0JZJGoytS_7Wo5;a zn+5Ul5D$hrKruBpC0h77cF;)c!nItn)4T;Cml(vI`YUrztKA(^epPPyzV2gyiqCsb ziECO2n5`A8p_~y**9LN}-3Y9cs}SZW@$Y zKue4HhNK6|evNPZMN2;08YF&H##Ip z#&vVVwVn6waQIq%=r$jUx`i};6G!%TlKt7Z@|YNVzBC1~=R&|ORE1DheLCA7Mjd~P z&2O~c@});?w4vbPOys*67gLd_CCRAZhV43s5YMrPvYC$ zE}6#y)S|x60m4m{JFps>C0IKm%jLfFqwyYP+Qr!#?{#?7;w;(uQd4*G&HKP~D*jrm zJYZOnoN@9mGXAeTKXp#PC71t|`De8*NmUZ#mOl;d5|aJo&E>h)@iptE23v@h%$naW38FCs(}ca9vCIpU7pKQ7U> zUy}hutU4-B-EzRVWK#&cVtHB9H>hCv;$gF8s(U}A6iAUB+jB_85;Ic^nfkrCr6u(0 z>;p+%$+X)s+^0d7yN?Shsq9Sr<%7}tsPB(D7~L>jKnk@^i)N_iF!XTTXp;6ed65ME zTUdAisSK^}*2E>2uA@Zgs3gS^Y|OLl=!ihI$D zk;RhaD&F5B=GzlrJjk~1p!6_**f~utaRADTx-n`rep92I@ z3J9&6s-vI3KAI=)VpdMlns@pmYSx{|69ycR%mHR$b+0gd8>>2~w8GLW01)ePk zSBddCL2z)gH{(RX5MRZfg~8oWGe@%QBy(Q^Z9PeZx3| z4}U8lLU3DH40{8)L{;)w&;NdJ~a5IP8$OpQxi=*n3^~ z`HCBj?=J9JiM|6U7>=W-B9AZmU!T?T{~B81$^}Zq-8yTUc3iTbd5Q$|i&?}jqA*b) z=l6hdqWa^z$$Bg?;wOkD<|oKx;D8b+t@EEL+Wns{;`v`43jg17eUp*~ppFU-cJ)YP z)Zq7bl+_SWAjCiekb7g^gQ4zV^?Fgi|M7J2^VpCHYHjS|f%3 zsGK!`BY^&YItr6BFhMInLC#IFMQ*wPT=C9#Zbxii7^nT0zoP$#B~F!MY|Mx8m%DV4 z>wR){`9D`b2cL>$bOsSsK0Pz7ghwbitTzADPqoAZqqS#Iy-!+0jag*9C;AgK2X;Om zBrsJ{a&#t{sqhdT&i>NS#8>@0`;L$B&v^>R- zVLCNhBPPPN5_zAsI-2Tz>%)ytWgwZb#pJ1faVavvL05)SO0(1OmDN@(YvqM3!UamH zpR~;OREH2m?tJpKgZ4^g6Oas!TwK`2(ES7*3;Rm{1br;UwsSXK#??u7lSf2NtUY%Y zd8XIS+vV*pK5%M~aU_0P_Y-s#b{WO8S1PyRjP}41{{&soZ><#`J8wNZ>i!ApJ7cuL za1WfP`U1F7kQ}x>=1<>qch%ln9G9&zKWk!({0R#7`U%pFiD=rh`3V{aHb2)7uz4&r 
zupa{epSh$zTqf0r00l^@i!A~5zW#Vaxj(jC1irvRYuiW{b1AzihOd67n5fYPWOG|Z zp+ooeu7RtJ8Q@t!MZixJZq;h6l?Bj}`BKOk=PBIjgW_%?N*x?KSO0W5VOTSOwiI-< z&7A@a(Je^%=e&uIFMh9@4*XSsf%;<;v#?aFmzuJ?KSAF!eu6Ab(Y>d53M(=3Jux+ST-6kPEmL(+vO_Z9 z2lGcz#;X4Mg?Jp=ZKNC)3s6&=@x?Al8h;H}*iX=cEg*<_-SM2^x0Q19C#V(WrNrDX z6Z}uZ!t$pZ#QeEe9`Ao&M?rsm9bh35QqCQU{X#ZI$vEi}PjJ3MQKD-|ccYC$Q*QOiD3xe?#9!2gp6^4EHd7?6ZspBbcLL z-p1L?%EbV8D7%3fsl4!ei6YD~c8vFY67+=aCX#hho;`{oaXb z{<9x~1e|_iq$FyQ%GW( z#gnx%iEZCd%3q&sjJpGk?aH6{5Sub0AqQ8F65i^|GN2JZP|;fp|10bK*S5%wj#AG@ z{KMsMFQ5OWKK|F4+R-_PezJdF-ui)8y5B{aBGoHd?fCe~3cFYGdEu?>DL!Xxe}hrl z4+m^V*xz6~>i#Z+{l9D_{=dwU;330lnq_K&-DF?(5Ery*A#rvtC|#vQrl#^~IxU#Ww? zpU-T!mu%{>Es3OGI?w71b3otC+%ZxR416jl?|yuET&ht5d;Aj=+o1=P>hsIMM%I)B zrs~B~EiFqm_Ur>Y8a>+I5Y%5N>fdv`sy)HHA-KpuqV#^>WeMXX{n{Fuid12I0LR26 z>C8ar4}a(?T7`3d&IfM#1i#4f$YxXbspxNH*}o2$FE-wto&|eKOt$a%s7|g>-h6Od z8B|U1&PDf>+qkx_vU=#qT5gKok}!|`fD1+(gkl3K<`jl6-L0=>p@tR*_ckNf`ECdG zN)dSlI82R*@0#))=RM0+W^Zb&tcXm#nyweg(wYoHC@DeR9KEg#K%{k>e}dc}r3z+z zV$?H034ho&-q6tLf(}1mP(u-ti*IDF(gCG54)u)OxC!qEZ=ouOCdmoYmbK{!E{hkk z5%q}1DsY6<2Z!5^b?m%MS_|Tt*YXPuY8yZ13i4pH>}&^4@D7Tt8b3;pwz|bq4Fz!% zhv>dj0`dGkL;i1`Yj7LQyRw05CO%H`pe{r7-dmaK)c(ec^PLh0HDm@WI91mh60VEV zdW3ebxieZ}txf&Oi6dM->&~NFma+)sTJ4r`Oz;*WB{zDM`r5u&ExMtx6 zmW6ibpM};>j;rkRh{q1@2lzQbQmWBADUOT6Lvv3?bb7(kanlXmQE?#vS<#8ij7%7~ zr-^J8t2$b&S+IQr-OwOC;*#2i# z=Sm(dnD&j9zM%UtV{V;kl?Nrbqojz*ap+Dlr{}VKE9t@uyXgKyt z)Mzqc=De~@rLUpZes4KYede? 
z*GZf;T?f@D=~j+bZ1#NQfx7|aZ&`uj9+bM=f`0N6lwrkoapoy_$d{97@2TRA7c6^> zD*?PihPB5jx#lx=g7#ujj&~Zy<3=AY$ca97s+RLvlYke3{sn|pDpEeQQ?Ic|c9R{; zhALhtuXz~4&Amtu(&Y#wTi4zA;u-TTaZ#^w%mI;hmqK6^yy9wkfQ-X$>q;4DwgS*Q9GsDXw&7>5G+oqGDF~W{^%gws8$eYJh-a%EB4)W|;u5=TZIe&6>l&ZtFY5%D- zu*EW~Z0aid`sb~9p*?J$EakEts|+9gzbu|X0=i$e=XV%FANt48kvs!{(HY(Tb;=`e zonhwpziZbNj=?rkE_c_s`hJfz{E9aGIz0QPbumv)qug>eoY}pp;~u?;T=XJf`6#ja zg=+2I-bu5n&9Qdo#0PX#8=!S_GBYKT znMr(V`DJeP(r)B>e1droQOy=mNxv9FsXr3hVJK03zlpk5lBD$Cj*v)5*CR4`rN9cAML%&4kg`DYjb2 zzB-t2vg;4Nu;j4>KiKncpVcBijXPV^uMUUtim@f&e@OLn0yF@JfHFi7lgiqUye}#) z?pqt_tSeT2pSF1CAFxtnmqA@GP_VI{$2KEB%>QUP{npnS8U9y~2CAvLA;vOq06_G9 zX50+1kaBtFhxuu4#@HimgGTZ1l-{E8-e2>pqc@*dGj3063LRYukcy!?t!?=}Mm}p# zKHbAOe5S@TY8Vvg)k7*?tf94l3O=ku+wPlcep^tz^;S^X?^RE*zLy-<7zK(>u*JI9+YH9+E!4;b8T%ER5nSjrH~RJV-+3iy zh#MfAK6~_WuV1+AG<0E;laifq)t(g88p_fiAQ|PeU=w;sqNpKjj+ilRb-u6vW#`_} zeR)ij`Ce00g(D3+kZ9Lg{!obh=3SKb=JbujyEUECHZUOBDUz9Vuhqbga=f}B(JIVL z^CMf4>k&|HL}LCyejB=R&EClvJgfe)-FExh{+XW4#o<5$$fL@fyh4iQhM6rY#K-IY z!huz~k~udm0Vt?CI&W&X@yMAAiYID&QCRwZYH{b zr{0~2iCT*b3OZM(9%QJkFUMz2wo|*CMhZ(s8QXyfD&=N|qu8q%J2PcZ5%#LsIYFG^kx0=3Tuvv*WEi88Gl^JAR9HwK1rwvi zbZT~FxW)9Vk@X%-jqz85^Vi3iy;Z%O^IKV^p03*SsywfEQ)7B(*%KD<9t86~sehFYaiX zB5j^WrzSHXP#I^}Qz_qbBC^ zQ(f`vHEUXMd)r2)6y;jGs||&7iro*CVEjo0UkO0b7@slaCaZ#VX1<`-b7*Ap8$QQl zNRRE?1JTvn@0pv_P(!{-qUEnTg_-o*sdS9Nrw>hzK`#KSHCCKZgP7jV5Gyn5pRPnf zG3V#BFWvZPJ>#F2#puE=8Und3o<9JVUBI zER(Q^sBo%TlL@{Xx8B&TH!4FTrB`^Y!J4lhw2fWnyepaRt}z8y9(*|^@5w)ACu%vdO}*s*lFIbn16|9 zc;lerveWK%PxVQtnDS2$1LI0m zo_)IT2nyVYzMA)psznHFygovhD2(^I`bVO)M}a?e`?J9#@yM9l8k$zTtM|t{IA#k^ zVNX!?QrUbV0KCN5P~BK%9X-4Ll*m9wm3Y`Dx&9W)Jb5?X@#O8AoMe+xI_q#hVt(pU z#E{!Wu`~OSWaw*6^UcX`!h%48{PZv~Kog>aUq*-?Evyn^!g_{XjN|`87+ZBlL)K{zAISfDpe7#BU0g zljxAYmkbNxJ<@t}V=b=r_#}n)vt}@%)w{AK=77bcV(OC^%aQnz#3KC_Fc|9N%H1Er zbQ>Xz#>Bwc&Fo^fzoxjqw6RGTVQPS62kD~_1J~`6(*u+3O-(rM5-)nkuadfNtx6ni zn=EHfx;m>seD>O5RfWMWq#ucz!N^BW{E$J{wKx*VS@EsKvL`{8^!!&^weBd-iXRsS z)v2qa(1T$E`1N^Hmm#AkImXmh|n@6vfoz>SxR+hMu6MTiBFP4lHlO=So@4; 
z_shq-DbK!E>EJp&+SJG`Ns6UhzE-W4Q@ZPZ=H>8W#$MnQV76-3jMN*5!XYx<0LQ@1 zGl>}2iRsa@6YTn7;g7@iq21C?(Npui$GwaW$(tG>_%WB6S{%&Hb=`&5;<*+2sO58f z=Ta|^EX-qdpNN8`5rhwHnmvvp_HmkG(6`RRqmu3#zPIjLzaFrP%-aqX1^WAU`(@Y9 zLKlQIeayeoLL~NOa`MjnSxrq%H()omT`wRXKbMa{``U&(ZfmnO5yGRh8cCj77I?O_ zCg;5OhR>$t*p=Lpdu|ce$1A9bVZ>SE+6~xViC3>d@v%*K6!kh7Y3=1bG{0Cq8@=-H zSWl#D-9#E!x5JGpVZ9{y(b?{EQ?5mwz1_(9y`z;J*(^&RhvCUSSA|4TvqJD`b&)@9 zT}?GwEBsmVf*ot%hDaDwm*IEKa59C%!gJG-tOYWfwgv8OS*UD(|IDX~z{M*D))&3q zvb&7W3;nXR_wph|3Z@<+-56vvvJ&a;Kc|3NRC7&7lDR0lm#;g$e|nacC&snifZd#O z(NeFgiC<8zvp=~hQm_9-*jXs!EFwQI#ZcbPm|nQx=1l8%>0b5row@jtz?UT9hBp{4jmkodHw3nh2FvGDNL*Br2nL4DqKIuCcaenA-opLV%VkUCA~JkE z|1(0Zy8~afrsrdV8v}C0wWnwT=bq!uNAkO`lNS!t$c)gk@mCS~o=H#3s%yE)M(^9Xl7GchHZ)=g_ z0ar||`s(r`lRXDy^Xo`xQ@MZtG-`ip_4MP3F}7UpwMcMvo$IJDcr+UPmbTqW?^)N> zhES~Oi&;KIXq&ftpyl*QBGxgLC0P890qqXX?&Q~PX#rnpss*xb!j9wg(v@jdVjK5n zmF^&>N5v2g8iTq$zxEoDs8(a7Zi`GqUHiin3mnDmcK>HhG2RhfOuUpfK2tuvljeaQ zR&(3jXVM&tPdmUzh?Eo@Z6 z+P;Moh~to9ydXkUda6`;b1u|^vC zhTxfNd@p~<2^UPoEPUTKX+%7C(;LYd@+IO;!4vyZAKzsitEs#ruY|X~qbk0T>-e>V z`d;?Zp-1Cf)RCm)TQNY$YL>;6>qncj$Y~7zg!Vou?(0#P$*hJwfRlV&i9Fo)(MCG9 zb{$99qFiA)-1x{OwVypolYWG<$D<~&O~&$b)V zIA7(AoqK(xKYEjN{558~D2fY@z5948zgLQ6i*&BJdP;p|(AP3+d>zY~Vf+zbyx`lx117~<;8lT+*U=8?C6)+(e=T1NWGc!anDk81oT@|8dCStpq-=8Q}}Z(*C8YD7d}%Y6N_7%#r8*;bnQp zlG=ma{nyvYUi^r=DwHgIvF&7Xs%u;~>cE7sdnjmB^-Ggk~eXO2Cr|69PoCkW-_sH8~sG>f-Y1SVB7B=Pn<# zPuISzgK?b2ZPa!gcSMv58@VlJ*^S*OO&-$x(jy%y6wvVwbeLx=S#Ie0&GyD-sNt8I zN+AW|`1@te8(#X($QuJ9V51?qGT&!D#U`G%kXpC%+is62>?~rH#q~D`58M$V3?qDQj1>l*3S)HklU_Vj<}#P9Kj$+YO$zp{ngK@smb}-*}WUc0-Z=> z4}FxwE*7V&qp)ouO;-o9OR1V|IB6o%fM`HWPIWH(AAFU5r=>8A@MVB&@s<{@>DsFo z5UOZ@0#^ zTQfz*aWQV1^2|s44M)PIodu10ZdTCxMvt9W>GYK-U-i4?#OIgWC23c) z1aTd2AB?XClTJ(qSc1G!y7LeX7tMI>ucLPpcZTn&j63AT8mp2=Kd?*#sH(E6~c$E~0$+JZl0qWrwa+h>2ruZ)BHkfG1(#xQ& zpP)mlK36J%@v+jjee*bg<(%L7&j7&R^OHLP)wFa=ZW^0gTe$4DdqzlhB3F<}T>Fc* z|2vnuCi(Z@{}x{LFQn5MMhRQeVmZ2S1ge$b^F_V{3T&sC)7UvD)qOFoa^}O-gJI<7 
zrP{{+f`cXXN|_N1s<&1BhbU`4ysOs7J0wV+o4gLSAY{X1KE(MBo&#@~Qyp$jsp2Ib zti4=osTRB*ZYk9sp!7A5OwSdlCc_7? z3b`s1qQ-LNK|>G~F74DipOba{7eL-um@6En-2nJkAI2u%f42MFq|&PuNs%LYNy>>U zG{2rc)z*0TE0o7IUnQ3RtC7A^*ayZRrHntE?q!f+q;fX&jS*v6HYA%&YV&OBA6g@5 zXZi;{@lIsCz4cc^>iP25uy@DY)Hm05+Rl|i=}2l6X7(4a*YDv->kSav8kn(+?DFg^ z1gT)Kf?#cdgR0Hyb%UqIwn6g$8~6MLb#t z6+ZdOqj}SwQM6bc8#;{5T)knA$4EK>za)BCGkgU1KK2O9Lh|a{22!X61o%$iX5`vc zCX@6sOCk|fo*MAcY$w($9)WqOk(Mk`Ag_TC1XOgV8>(fvXI%xKo9!I9GdP0Z>hnt8 zu!kAM=p4R@Wn}m;KMIdd$se~XaBFO5STh!T_$)4y`P=Fz8N0z#Seq>{=K5r4}4y{)dO+Kh1P3 z;aDj72|86w>7H8gDz%xmG1Vnw+>^pKde9M$5B1RzcH@ zy;X-ciz>BGI@XcNe5a0wlPKQ83yLIHXFOt$p4NlITF!gZ*(=ajU(1BgDmjb|EBUhY zxN=C~djW-hVi)=ZiRRwQ-uxmFj{NQfXEf*Jr1IC4ydr+C>kIv~?oD&d49^}18_s+$ zc)+}A`^IM|`g*F{;#e(FAHkCbw0Y=J7<(nrGvb0*g@)8NPh=IMKN1vBaYP()my+8Z zyC26X#*l0o5L>Zd$$usPO?r^v!9QW1zi`jL=ODO=Hh^JnYb-+#WKmaVW`yc3Wk;F% zqZZ-bqZOB6?V*=1?=V~9_p+-SDZJzImit0 zbpmCQ@uMTA3^&tR_7?4ugLfhvUs^*L2qWS5%oA{lBl9=x;cZB=QroI@<~_}t98)FN zFzHbBeKt>q5VDHY#h}hbs_E1C7L;UKUwU57EKfT81-L7ACoAT} zqZiq}zWEe&NOZ~`{Cp(O%bN6&HNKPR4`zQrYRDMk?~n?6m{_U~qt92Oo`T!?qeGrC%H ztT=wkBz0*OnqCL0^oE2vlN=725)SV(2=fT%=UudI1F-$7;!otJB56n2pMBI5OlQF{+S50_Q)SqZ#V{|3 z@0J`A!V^NypbH5(ZUnI*w-wkn#{vZEAvxxn>d%}2K?idO1IwvVoOje+oF-fhH&L(&BG`+%h$z>;n?IdBeM><3HXCz zu!?RXUirBzZt_gc5U(}5ExNLlyt1l1Ze}pPzMTKo>qyHOV^H4^Y$@L}1=|3H#T!@} z())+G*e&u{8pbY&o_uLxw9nqf$LUuVm_v?T{uU_xAsD7u3i#Yrqs0|SB{^gpwvDPk6#zSa5CwX;)#>8)JabCr3%FgR^RO1tPfWTaQIk5{%1X8?1( z0jPq!Qi@@K6nToWklBk-Xg-gUuj?3C;k>}a50uJmz7GO+0XK@FAxSn1Q(9Zg3^!)w zW(zNT-^5&Oh96xA>JaXE1Z_*-&XuVbM6~txH7+c-6Gt6x$rK5FGb2Mv^LP~Z_=yxp z`0>Zd-Tv@SaOPDO(zVYW`Gnwx*N<(*#f5~7uwjU{6)~%5WVzRy?PuSnq8A(F7PQ_t zVl1CpJ_#uN{}59`w4G}(XxyqNqx@F14OUQ(r16t&b8zA^EMYnS_Lvg-8!!>Bs>eoune z;fHFRxNpn%+&?*@)ec&BMGDH2Z_>BZ)qiu(UxId6&Ssz)ngO|l!wVumS%_`=>|&pb zQ!fr}2d!?O`}-#ZcNVqqloqzRCT}}I9pSRJ9Jsd_iG$_{aM#1zLccc(Ba82RL9>K? 
zO7H05`U2)9?qK2c7>7qL%D9-t0Ch>_NHIKq^L(UTc>re2-sdzNgezb|PT(Z5P`mNR z+dQ4P3$jhE z3pH=CWt{|l2`)0Ej+Y%E)s=rmt|BkX|EXV&48wkbbj2?ju77;+-##RIJ@&GU)nkZj zatK6SN)&r1yN0AVg~;OPx@Z$W2z-C!H%+y?6Po?(VV@+;?1H+Eb3o>rDxF>Fp<-bc z2*i&swS8}GEU^a$1!?-{z_kHE6 zz#5hY$UNlkZQ{zQ&xVTu%ymyaPVUzJHi>#1i5b6i^nUvid)#;p^8L(`%JKU{$L^S4 ztM^Zz3ld_7)1%I`fBQK54~r-B@BiRBiss(2ou^~9kI&PlUkxwdWz#BG-{+f_Y51o; zfd0~sWE5fNx2sFfbN_flVkRN13X=i$`!VcXaHaNoTLq3Uq!0uGcG7McTIcI7V7>og zki>x&)9L<3^W|aMZA<}cM}~8NrgK^mKr_8f6Xt8q{w%8j$)u%xU28L3T9rbsmAiA? z%IxrpY_@g7Mt#0wE^Bs5La|?(_vy=05@mtUxho}8XI`=ZEYl27CeRs}OMCMZ0E`J{ z*^BiY!1^QjE{4P8KdG3NX%)qE45<@{>QJXYlmtXXMM?(7`dv^xQS?ti|3 zPXnc&8tedYX3PPupdlx~PZ5TZ?Mt&^DfobVrhzel+3P=WsturtG(bw$>ZZ<5P!o{D zHM5rW&$kLa|E_|4oplO}o&QsFVu0iio1Y*#hhvk)@A|OZq7@b9rx@7z3hYBkd5ahj z+AUN3rN{d2Feh7v^GaLBc`iVkIKN)5R}6^PiCxSM>Ln*W0n{`C{+=lCzjf}SEHkVZ z1<)6%=CEC+Y0epBaw`C>|7-Z}Ptc2!{?8fB(Wy4*5}xc=24fKe}P zn{Qw$%?{3cwMofe@I7`aKxk^yNLveyvt(p8(z1?g@j-gxaiNTu-!KFSzC!_gC4$;VIpKC5f!tO&mc z-xE~5!4RH*B%2VtKrpf+>_$%$BvQwuIgzl<5u+-QV4;E^`U~C9;Y#$zzt179n`GXC zJW5YcCV{Bc){0i1r@cqv7HjtxFYXbDbbF31tej7^OiD>7b`_a$nJz9vGp}@QrzZ@8 zJ=I&rtgluL6@QMqLNB31&Uj;RX9IRquXn0U@c2n2W$uv&B1)oU8@>`kS8_YrsLL>R z`s%yrZZh4&{LiDV>$}YT3>QYc@w@Wz_uS6&C8*kc6)I@XcY~qycF&ATX6P26#W>oj z%+t=s9@!2qBH5Ts*DKa`xQM4k$4K2lN6?^NP>}7E6bRiI z#YD%nVnx3W$@Wnpb}~eI%vA8uBF#I&*Uml!yb{<2>w9CP@=b_*c#NA@o?pAs@+x|q z$&)~&1M^_00uiHDZ7SNUB@4gO$NAdh(KQKT=Qy$kcWUpeQF+ru5j=OBCoKpm9=v=z zF5`tOJdz~zuwPh@w##g1xu|YaLzYGIc!S8cJnG=l4Vcj5GUS#ByUmxz%38gPKA?~O z%GIURb{^WNl_U28ZHvx~iW-m+MNb^9kJiLA0i5J?~e=C+u zd^#(}OxdnL7IKs8#?zY7xupj0G#N_W7tq-9qsePU5^7bol_|OJX@Jzf0$AN@Z$aI9 zyuM|eM3xb&o9DQdk){H3do>@PZL;HA;V{JZ^WnKCjg9GweXjx)=tFLC1r36vo?>sj zv7Hr0Uk$Q1v>Ph+;BSk2s1({uka=8;oQ@Z+ zxIk(gIXZrG+Y}b+E5Z_b7a3e}R#BJHA3m;vuvmO2vE+Xx)rtciY!<}@ZBw1`VnD<$ zMCth5Y>^Dfc^O+jhIUkXN0Ghc_MHIWAgnarf^`N1W(n1FM7gw0ZKcSi_dLVZ>U)vr zk?{DDgzF{g@yut6GwL$6cq%q&hfW|JN$V5Dz46x2C*-ymVKxC5Mv(tU&<+VN5cFf+ zNdb~(z5;1FtkrXPs#Y{1(n@zOKHJ$msp%1DKC=nZNzt%(RTC(nlLZTp&DvNeAII>_ 
zO|Xr=G+Rpb;&uu?IPA9vU+QWNFB<&H+BFb=;4Iiqdrq7V>A|Qv&x13FW?|Nz%0~j% ze0)w_v*_v;acb(6yVe$9StcwnCHLNTCkKb%|0!{U7A!wN+ zvrTHG5i(sn_uaoNw3_276Zq!xI!^NXM!n)WRbpTdoSi$jK<@?>y8yxRldMwB_X>^R zFRi+H+o|{RnvR~2mQ-cW^;?u9w@U6ZS%%Crh5jHhFJ(l#jfK}j8~WqtA(y;6MEU95 z16y(<&MWl6>&+*HVuS1H6gObh;U{Nc zOn-Ne&R1lXprP56DQhZqpq`T)xxIqdOGlCqHE(9{4v3}N_Zq~Yn=#(=(n>iKfCkGf z6Y>yZxHY@7XEZ=ni>Im`>ISe{8ptq-M5Pya9N0+Vv)SIh`301)S)${PyX071rFumL z z-G2IlwB|f~#m$`AF41@;!OAgzz3jUS8yq=jr@Mr268YoD3v{3E&=$NRCfqwXG2;r@ zxGy(*fLM)6Y`0yWk(8?{tb&AIj=|HI^DqTJdch8}1b%VF z;W`adzrHkhba~E(!+PF^zK285{mO&lV3mUJMWf0c0EM>0W8`p3O2s(@;z=#oGWbJp zW5~RqO%{AnhyTRSQ8j$fw5!nQPPu@BkjX8@Z$F|Y+FcA%b~19js#@lV?2JQVM6>dz z?i@zB2s`I=Qn0HrE^(z_)nZ3>RSYXv6nt;iO_`jF8c+#0Tn)1dc^D@~XP}J7e}t!} z4!2SNS`GGTQ0t$yzWCZBnZ=UHA<}Z{zl@QyT2z(L&3eOuf<>X@;pShbKol_?o&zDmn{xLb<)pk zShnvX>mioB9~b&~HH07Ps}ts_#3_GY!bv@r@$6`LuylX~qjO8VY@YI(X*N*%_Jp2lJS5HzqSoM3u%k8{`{&S#Kf`!)( zP%FWbAzhW0EB}ankE>#`J<0LDfL4J~8~Zof*i}72+`)W~jOH?o2VmED5rAET8{Y5~ zQ(_^@Nd_8Ldgtpw`i^t0>69RU$2$TL&br<`sR_E_ERFAK(2ea+lXDx%{z>!=mq&=t zHO+J@hAk`#@qI2p<>;M+f_3XfZRBB{1ZGv?rYToi<_|`J0k-6%7JdEX)z=jUqfnmR z55@&oSMJ0U&<^yHiA3z3v`pUF4emvG%VtP7*?;utc^T=%_~S)%U|zCD5k1#aOHJRC zT6fHVLiEE#kvK5}6`X|kOXNO5nz_8&M; zw~nHOv}3c%kWk(iTJ>KvOAIz}O2F-q%Yp@SGEZ+0*5>V{e zhw8=M8t!(_X*G>3sj?GFUQqZN2s zRfQ+^vutoO7}K&EWeORf2AcIXag(Sgh;&aZM*DGb&k<S4@^oce93eS{>Bf>nE%_w7yNNQasrCZjouh1y#z8l{hX&`kpzuiH0^>fN- z-l;hzM)7kv_k+k~U)@D8({;|_*%PhA-EHlb`+Sl+VX9{z@+D}Q5#@FZB=p!H;0gO< z8X@vW(G2h1#g75c6Qc|`F<;AS>7gHLQgmKheUPu4rd|?g*Yw*gsTdT75WRM`$~FW) z#<}tc8cS<4P5gh@d+V^M+I??)5D^st=?(>@L6B}#TDn^WCB^~iWwo0fzCn-0|-HJZGQS`#RU}J@0j%Ke!fa&CHs6R^00wpMYEUCuUdLXBBiS zow;7!36rqhz>q}P_U-MGs8!blJ|iG|LO!%?M1>x7B6mGC41WlwspHPV#rghGbYq-x z>N-D%gxzx47oHAfJDh3)Nbl8LM5IvEGWN)1vtav_JgV3poG(LH3hVy-Acpc2J2{~* zFqyE)sU2|VQbVj8LOrc3#c1ZA^s5Fdd%N3#se}o|i+s}y1vn}0nn+g}4bqMZ_w~}; z&^o@~)PS!$bkr@nyHK97ZP;Nnd}{Rieg7_zbl6m+U&uO4oZcd=+c0M9f;%^FvsMtv z6X$F*k~Kg;Fgjx4Ldu?CF?_54#_S}=xd*+!EU9ijk;3-Ht660SL`24W7T#XARF`am 
zB0vuUre9NE+vH~nOJiGtHZrRMu}mb#AR2#ZP)N6NibsGIXCtmiHCje^k=QY&jgU$* zf2;vuZ@Ikz5DVK7urRr6Oo%y5|Fi^`lF@O<=Y1AmEWAg@hQF|ujKz7?(#9~c^+57d zgIXAWLFw|x(#g+taxvy5%vUnM)Q3P+kfI2EJC5f&E7;+mi1kQ4v1Hd8)P4p1A&yB%@X%U{|(LefsLhD%)o z3z;5V2W&&P8{$I>W>p?kbL#5TJ-GSZGA&!OyXjJ4LY&&`E~j1ffgp@ur5-<9QT#Av zDSF>AcEsO=G)X&2z70TShgp4ZxWsylqkhZov|?iAT6QYzP?VmWbhX^`=s~%e)YGDh zDb57d@xaUiM3=kcHDSfq+Lp`&VLec#X756M>uAo}vl@7G6+{nEF(KK!B>@f&YfEBB zOU4+5J!@!+?->Huf3<%4_FD(NHUS8Gtfq83we(zLTM)R_1v$Vf6{>6UY5Rk?RWCat%71zWZ}&j>xfwZ*M}ENt39E z6;f`<%i-wLe;}IBOGn`XlhJx?fdo7%f2g-$D0FO=>278nHdDrNxpW|9>5{_B2&NE_ z_A_-d=}izFj;O7QNf-sSM!^bR)GrK$I)I`0cfr*_TFMJ)U3Hn1+MxsbVcFbZn~LA9 z=ZR5IqFR&>WdJre)jy?Q^H-4Cf3uDaJ<&4HIqR9!$wljcUgOjUw=j&7#PF3*`4QRP z$;S5<_O#Y@Z=t(TTzm|}C=TyTTYFEP$6l9Y=nqMYuz_TZ8+2tkiqGzR39z3D1^gcW z*!IUU_~SbG<6ii`F&|W&)Hr4?-iWd~WZrOret?%+f)@|30XhI>#L&%$pK4x@qoxX% zFob|C@C9^Au>5Jc_&yXJG>)l-%se&%Yi?#e$4@W=VHoERyou&S*x5K{H%PT z`$prEn}GN2Mb-Cjs$&MDcUd%IN%H4~e`uHPqCLhqh4e0nr61vUsw<3rIYB6>gG;i4 z$UHxQd*$P1E{Z?a)Ps%s-Jijy*!j%u7wNjR$G;!6B9>H?j-FlKf5h{k zNt4O~yN0mu@h%{-FI+swWT-@&B(Pp1vI~`Im~6s79i-X-o2NB*Ddx&9uQw?Q6qnxP z8W#aL6}Ms|`G(w1z5~19*6~c(UF+`~gK|ArBXjdYMaf+~cHom> z$EWgoRomAvzG-Jc`X2sXgH0l8@bC(pl@+Pu8KN5##H3jMZ3E_X;M;0B`e0P0P~$_3 zfe_n|K5Y-&pz>XTGWJ`K-IoES2xP14wqwiP5!-4!F_i6U0Al&n=mRgYv@R29l>iuSsI`Fs)kJb3 z3!80wrMO1!a4E`CeH@a;pR~*sJVs1P8(DW96IU*aZ=$ar`{7f<(;$DjgMo{;9wtNi z+4_i*CrkpL9K^e%dk(&1N1o(*e(x9YB4P=IFk9W@^^GBlt)rVwo?xZnO&3`fe;T|j#OR;kf>THJodC92(C)X%W|fx}pQByM9wLkA|Dq@LFvtNT9%MrN z^i=x`_*J6V8TpWXxcNcZ`%sASXO;F-WM)ULzccy12`2m@>{7Pt$T-^;;`iL|i|X74 z_EKmDMYRcnMEd)Vaev+J6pkT2Q+$a51U8ICZdiD8j)*fD{dc zSr?&_4u|9`duysI#XA0VbuFvfd;?zqxW7-CJZ}DS}Mx>iJ zsylhPoX{94^`dFKEW2i+`n+kHUS}>M`hDrnpY91Taq*y{qHQ%VzZ6Qz>l|7OGBmV)s1xq zJ3cm5^ZM~%b+7KoAdrwQQ21KK+han?P2CRV(GwMR6M7pXR*(Qd?OTh->cEd z*rrW7UJBZwWTBO3@*7uC-{xa$T?Xs7QY;ZLtD-eIAGmI9>OeXER;(3<8Yr2?N37R= zkXz?dijm;MJuI`G>R901->Fr1^Y|g3td``G$}~YPJxVUZ_-lHY8&&MG(gyd6NYI@z z`43%+L#+o;uOw}{bZsSx zUSi8;cbo3IC7r3LNK##ou*=V0cM!qqKc?E@fMw;j3EJIxwnUns(pdg99qz=D`m{6Y 
zMmBc)7wa*Fv0`G%Kx6!jJ3bZ~%MgMMgSt_R+88o&p)E(A&kB9MZj`ubp34olS=S6> zidBSRfPB+Apy#P=@d}kMP0u_^-(@Lyglq6J+15og#Xm}^G`l=}>_Px;_B7@j+nv)a z4G!9O?&m(R(NV3waz`ED*$}TnNiIq^SRQ_2@_K>9&h@%RcaiG8UYQ(lbw=%orK3~< z!i-SIso9sSC2{x;oxVUKNha241w(jet8K^V8=JCC} z*bWCFSCf`-Ps`^LW;c zkmX->G8_Y1i$@Z{g{!c^v#D3+egB{1Aca`$AT|>m`@vAr6SgP4blZbqTPY>a&S32x1y@WsT0riSvXBHcK( zFrTHr=FmjO(4OknJ%%;S2|b2`U7pLJbM5M14q8}YfMo_#O(8&VM*bo=8|nO;4gbT? z#QRZ1&|dj^0HQjHdDXno#MEhasIdn{oDqYP0b}{We2bNbj&_h!m%)_YP?qZ5d zDR@jYi|vhhfDRS&g`WK6CQxg>j2=7z)QE;Kr_K!f6=;K>AQ>}s0w4?=Qm%?}F5RX+ z3o7G=aYRX4G+0g6urEEi*G-Bwk#vAJOH(SkRa(yI_R#n>?Te`|UV!vyv?dbM3K%L# zN7mUN7s>_k-*0#u^_06*S0CtHH=n)zQIJ*HB)g1fc4rH}(4@njUuip;YvOTC4>l+o zjck|vFqu_Xb#Q--sO7Aqg}Kt#VSLWI^L~`3jKKJo2Q0AmMM48)!d39c=*Y3;2WQ;r z5gY_ZR0Api3^XnC+cLmsV+OrzN~gN7wDC{y(MY4`2gEjpI`GVgaYgzMYL-XMjLtP5 z;iVJ|Gu6U<1q;gP7nhAG;}SGqIhOPge_Edhn*T zo_v(@DVr$IM9OsaVtX$u=5Jva- zUFB~LzPmkcSlSwhmisancgx|F@fp@;mp@VMLc;}!wFnWIX`QRz@98Zn1GzhQDDoYe zuCzw8tP*)WBm)(y{PbI+iTL?5kJXR%D0N(GSx1l7uP1e z)MYa=YYbUUZMwR_oZ7=KeRW=1i;7+bRyq~HzfVQpfXL(~dk16Wa;5Vy$(AX-e!x{5 z$#Z-BCIwRi!b^4$w1@hQF0vk$PHz#*NU>celX3ItcIK&goVzjaM1aYaee!@Ml*@~k zEZf*%VZM2y<++AICtbg}#4Uzg;0ugvl2^=j>EvjnP;y?1;Po1DWHEkpexu+Xj`Kh8 zsMgMzSc=jOQki8MW`6U$fJQ^!WxZ?!OZJVS&7uwbNA)6;-HeV}VyjrC1aHrVp%XAG z^08&53U%C9NbFvHA!mhk{1+bNU-^*#<)sdtk&eGy<)ulx(pKHW`=FHcLbA^bJk=ix z8^=$?6sA`O(`Id=;J%{N9HKu#hOlw?=o+!Ot11z5__d6*4@YC=z4yCFzHQtNHHgX} z1#PnjMi+4v6bfiFi9t7+HdPo9BTxuFD|3WY+s(KvwRtTE#-f`l`j+o z{TDx?4o^*f)&iW@{-*ScTg7q8`apGo>+H(27np97G-ndjR!sB5+@vs1w*Y-{L^DCt z5kS>EWKg?UpWJlrO85!FTldB#g|_nSmrwkAF5Vn<=NZhq< z7p=P20dmm3oHQ}g#Pp6Y9PQ{OB;TZ0NinquVxi@hbuhi|O>6=-O;D+vEakC?Hr?Zr zeGxdOxmFY|S_Y8@vwOXe+&D@Gu?o->s05tXdj zPY#vlVx~T3r*dx z*tI8jtU387w8t7?Ly@7~@FqpOEM;1**98l63XPqDf`c58t=MTFXYzpMc`4KSq#@R_ zU7k$!F~fx?ucEcjdh%D)5=}cjsZ&x;M;5Z`ca^)7pQI z)_>RM{uj4kNHt7027ru7nw|&nV3KPzyEst0B2ZdA-o5=3q*L8kR~f8SQ$+pjj?l`C zM-oyJcuu>8D7pu@yO&C~h8;6N$oSj~ou2=k=pWOYon$N5jtS9I?gE11IF z>G0?T{eFAfbGDG%_O^cuvxDCH&Ei`qn#0L*0bsa*Dj{!Wo0AZBW*n 
zYDGMaM?~kFhT>8q)k!COmFnFm@B_^S>Gg%zA9?ybc3DWVI^7;G3kC3}<665=Aj;z$ zuZvlzRrSrPs=dq6Q(lpHQT{=CJcrA^=u=HK<6Xtr7D91u6~+A@7{yZ{@;3bmnt;>S zzw)v?z;c2$H&(+&OVJxiL((y~7WaIoh1)|x$vYZuV%>&J(%1AC5fOlRzj!BAc)H`a z;xu94VV3>@P@?8ezrfa&3X!B&N+Ro;uMI}axVu_Oj$hYHXnFg3Ji2?K*jcBzead!1`%UVjoKO@-EVB?hdB$C+ToEpBy&Pb?`Mp+%wN@8uIYrKD40pP;hP#K!3XCz``PTVP6ilJ>$3zdKKu>1fK9s%^os)X_=z@UO#7CLu++{*AcLws0w$2^syf28TsN_C$QYw> z_hTUty@#)4td!a_+;z6Y1Jg6uRW2GVAVKA)(CJlT)k+I<8h31@H2ue4MX(5z889xO z>R&z^bwvp-thkgO+YMMmd!2Kv{{%HhLJikiD1aL~4T?Nn`3aJv29+Z7B-a>>N*X|RzTv()Un3T@vHeaQ{6Erx#JkO=0w73iQdw(vDvwWp0@togKuv`f7 zq7k7a%f`SbLigAxDZi7DK;Iydaia9c9!)T$WRKc_6=(B+hj}(cxlBJXSsGC@_>uRwR2W49T%{#io&&t;RTeJ|8VRP7g#( zG`zpw+8VjMw>SZ*FjB5Y{V%wB(uW!@J?lT{YAjnY?%W`1WHTGEuCf-O6+c)!_yxS(hd1#=JAQRI5uUl)0(CA?Bt*{o%`}zQtQVw2xE8Q5?(E z`mmF=`LtTB``{U`p2zvGrm6ap_)Sz9K-*^*535yLQk@;9$SF!%8}8M!_EZ2uW7G3) zK^^;Q58~D<^5h?hncgZhzeAX*2~I#oLm_$dDAS8QGj0T(f_PMQ4Jq3OcC3M^RP(;` zUfAQUL#s2`@^nWOmR9seMlSS zVJJV&GupmbC|@_W7KB?{l<3g!)NI+$K{h2}<)=77&mfYEzKp#0&blA#r%}S+aycpN zlD@}J#$pI!iC^jR=#2gPBH7~Q3X3qKx&NH*hckNScsfVQOkuYC%iWIv{5S|$U3*4yw*e;+w#AZtg4sFG(_)i3BVHc!vBL@`}zUGgnTnPZ0g0mj4NklIrC zIGQvA<#V8M+uxp1M?xMu)C|{5I*r~)CyB`|P;HK!<|xL)IRkIp7!_~u&{J8AGE?pz z`OH)K zf5R5+p401}HM4hld_A2q2-5yhhV!3nBLMd>^91=xT;EiYDRhfco2;I3+I^(x>HIF4 zTp+~P%t4<3NB^J)9fkxb-~j7cv|^cyt^6(d_5@^t%buA#dtG?DH3FnPc^pFW6=|iq zwR_(Hi($dEHO~qz@YMc$!FupIAa|X>uwZ)=<6N4v{Ro*PYW%|IKuIZm-u8nYO?FNd zSkQyk_tSXU>$s{p!p4hqpe`aVf_p02T3{`5PR*YthwGkBUG|v(zMETdA4BDU!2WH${Mk&1xd6EEM^^NmazJmX#xyeSeCo~ox z-Lm4%dad-YC>c_{G_+P@>w5_TJbbzB-ZDG!d{1`+0s>rNCJDewr=VOk_Ui)ImyrB% zaeHrxzVH&g7nMS_0|U5gh6~sSke=XO+frxBXQHM4=dL%iu^PfxY8m~d>AcBQkMzd* zH=eajK)4Jr+ry%nc6lnIKQyW;Dufb*)MagNJgZb9Bk*u=TleMCFoybzJ42SimO@$v zh70_=w1Qdo>wLZh?^M-%{9#V64c&=(m8F<$OPj$FHPK9|#0Kdd`fd4WJ~(6T6tpI`H??Jqnc#-r-uqiyCqJa==E-`zfH*J4JKx&^ z2$BX}_(3{c<{0VP#giW_nD;ijnsBLAJV1!o^zd(u@a2X7osICJDJM69U2^2}x;F}Y zB-R$&`2|v+%j8zH5A@!@jWIEJ3xm)TSHL<&LY<6#@z}m!;cl3u73TF6TOS(q^Y0gb 
z^y+sP6#eM%$Wu>(qT`ZKgc^vu;V^}0{}PI`JcTwof{`0g>LJBiC+CquRo=CNy+)%$ zZ-P=P{2Pxf~`&cKjWkAA~xgx8)`ATUadc z#|Q9wR<<;}p4EM@p{J-hq8Jt}akzOjG#Z`aVG+9JJWMc+dfz7oUKBATC%khYIb;43 zt+W73Wbk#$x!aULaYmsrC;!fNkLBp$$2^x}UIy1Rf6`nkn*=Q-okDI!MI`BD%N3M_ zqN}tJC=Y)dk@cIKrA~7RQ=;FB8TFDmsb##8xblF*)9>)^-2Cd3z0k&D|Cj~TrERJOLvh91>P><*r@d& zToe|S5FQ_=zBO_p zCX_Yr&40OM_^*~4|C7(j+5^VuV?RN4&^Tpa&izX-2&jURs^8$MMMom?ulxa;5Fls^ z(f!@Jc8|GXD@Cu50`O${*D4!jdO(Kt(f23F@gixv`2VQg`tof8T^WwN8UPpRCWQnn zaoE?vf4^q+yaMQ7-Td`LgF!GQS2i|*e>QLD%x)J4v!f zp@;L93BND3c3c4U7>=g%k3dxhy9@KXY#-m7su-pAY^n^7)T}&Oe@0a7URI~B*iGs; zk(3EY#H_SKkAbp5ZwM4sbo%Z*g5Bv8wFlH>n2;o1w^&yzOxLG%)s2koN9%r>IFNQp zm!m%e70Xb-@VW=c3OC7qf?|mQw`85#(^C3jK!4>i;ZEq))Cxx4Pp|s7EgTqh$=FQ| z<7alx%kvXNW@IQAmcJ<&0nVE}zuOcjXRsBehiRgmH=rZit zOO&6XlZG^v9zE3v_a6Q7#?$dPNlz1Eloe(^U&P3%=vR$aH%n5ztOWGjvp&Ci7KkH# zRS$ARO(FnvucsB~?$`qS=J)r!A0nA=7fFCbga{KX}Cpx4JmRIP-!<@ zw)3XiBOJJ*%^7Z@tV38j5Q@yJv9B4Va?SizzSL9*J~A+peIs*YK{(Ew@q|XYN5Cc& z6u_u#B!iQ}`z$(dI$H-3=J)mMrexgkd5Z|3kE2EafT6X2g3z`NHmuk}YEhc?l`#@7 zi)sslQQR|03VoM9G=@zlE+q&R$8-6nw_J29>C15%?nwoc`GkVlX^g~-D7})Etsu*D zMivT04(=@2^@Rl)He+|^qJl-ur37(+>Y+f)B&*6v{;%N(xkThChU8P zIrZb5c`?p-9E#T}J`(ta(E9no%ZdWji$LlC&x6e><0eZbtRaSqn=3Q%I|Q(92;{9T ziXFKObvoTQc``Ehly+2TL6MX%y+DAA6zfUPn|V^KSi%6b6hu_*MnIk{|63~N$T5M}Om&R!z=ppoYnE&| zwMrTLYs5f$ufH}AvXqlB5;;CCrDg>cja$QoNN(j<*2Zg$E`)DHbXDh!F?p-5%m9+xz^tZ=I|( z^Kx>o)C;j$Uz?+^4E}14Iw=PHM(uVkd8)7ScL3FTt0AvNHdjkz_{2w}Hk$yhrATHL zd;JMkmZ@Jz+kX@j|7Y7mJ>9so5BKgxY7e=5%MA1h%mt>QP&^|~pkHqJp0zf0a4o~6 zXJVx=M_JiwkA8`3*RpYRUOydRBH(@%gcPl#AehL(Op{Y*YSlJs7>V`m9p#)cj6m9 zgiSVSSmR`eE1t5pxX>pea(tYptbFLko20VK5?@p7J@RTtSARI+w8H*|=5fw7)1TWG z{She8?0`Ac<#8lSPo|^;J*_im$XA^lL*c5%Z#;!NrRpg+P#Zg*$G_WtM0QVAfl~4_ zZsd9?!tH67DOH-47jN-&DxW-e&D7>%1ItK($Q4J)D5tYBMDiKjnXI`XuK#fleV4Q} zjkX7&A!wVI$oY=`7F?|!4&FZxh9q#mXcM&F=>&DZGp>J*WmxsU2 z9@|^zt5PPi&StsHrevXYu;(XqQ?NES>WxT1vchE;+j-99WY(d9sVe{XIiMtdz}g^?Z; z*D8(kn?$5+^6!!qYvng4QV||$T!8QSIbr^lqs_7C(J(Gh6hc9>awth(& 
zbGVJ?#DbOgk&$KF`t1W_@M;IKI>yqbJ9UZZf-OBK6@ZNs`ZH{n5FdR3L5T+Ywe8-+S|q@~&=-j{leE@o<#l{F#w$c*7wO6wEwoR@7gdw`9=Ax#ygafGU)z}*ZvPZcYIep%{c1(&j24FS> zE4Yr&$n|QL7cOS$U$OOlk)ZU{wM{JpCpGC^8-n+mQYGGrCBF$o0#wYGfQEfh=voO{ zrr1VKnFCD8G{H4p+VQ-n?>Rkhq@p-R$5Rf@T~^M+elYIgV_6a?D`Znq@OsB*QmI80~C#FVwB-Bh7>-HE80m51aqu z@>*nRQCqfwciGYdr#TWLkgD25=ztvs;)(YB)EUKQ)))?pA-(}g`dSdy8YgRWLT+{IgC_| zDTSTwjij%hK=BqaiywR>bsX`m@|sjrGxYgriA~^2o5mo~Ky5)Zw^Y*{5qcBbTmjHJ zE@pDJVb&&44*^{Vj8~&}t)yxKKM~8N_Z=yN1PjhYhM>1Q_E#CEFEE)o8>?1hZda(b zDme;YUa<2u^E0_<0Wv=b}J(eq}9^p8QMCaLC3(5_1ur{{j-DqXWp-$##$dE!D;XnI%a1qLoacW-hU3Cw^YtOzgVRX zkF`}`;-D21<=vhc&WC{vV6e#Ub`JeOV%GMkueuBXws_Ud?gABiKBcH3V~11XW75Ri znK(2(!8$ymj?42B+ZZXy=0`v+T|ii`$!>=a78FE?>k1_h8JZZbi!dd=r}dw(EOpYD zZ6mEzrsd6vabzm;HQV-_aHV#Rm<_70s!ja*)%w0J(-A3FKz7{(S_@h<8$MH_d`&U@ z7Uhc=HZP!rz&Ja6fsVbBi-QHXTph-oydiTC z>uV969wcyIVF?0wFPGuindt;SvVbn`JH;zz+7oqIrv>_o@!ggwi2(Fe3)qjd@*4xE z#8)ch4?{g_=eR%r3g54Moh9{eNkH#_a+qO*>*jj>Sb*qi@gBK?ohf3VSl)$q%4KNyKJYz+?f^)!nX~pnpw#}I#!Tydzo@0&doyW%lTp&wp(QwiA z$OKA;Ce4`qGowth?#(Su#9&$w;{$$V;_J6(*n61#ld_BdS_TJnw2e5n3?m{dC^w_9 zF^s!cj)pDdRf48;)p88tVe74v^%(t9u`sY_9lG)J{Uq^?x51Q_n>Nzd3UmPJaL}$a z-aooTWmaE|%7PyEsj0!E*d(M3y-7jCzn0=T$5+Clfj&KY|$!+tseF!&Cn^v;8vdv#f<2tItGDK8*`4#27ifkj;2fxwvWEjd{I@;s|H^v3i6`a>_SNtHOt6r)DTaiXA(bvGNnh}ZNwme7Qx9nV2BK$|-fHH`6g=D9>cyQ5I@UY0*W zk_`!00Qb)D$_09~3p}ovg09Q}QC=BdwA8?v2L~WFP6vpMUu>J50ae_H-|nFd*jX!b zm0|me;o=scUuoGmY6|p^Z~h1eG`puX0h#b1v=yM5?F#67BLRKy4h+BdU%rO%CrA(A z1&ln05*|q`gaWeJe?J;|G*injfo~IK42Iz#8ws4gbi0dl+>2Iu%zJYbG;!}(570Ch z12oM~08R6ypCIbrjngC@eF>fVCN+5>bxZ>2j1RkJH8Ej`7O4Qm?lM5o97>29{ryNU zc7S2eVp?P{<-lthpD_&Up3VQ^QIntgbp#hr4hc_uqOPv@{3>+MZ3_)0J8)7q6U^AyAyKpHas%=(oy-1(4+I?+Wowc zB?)^!62k$UBUcV7uJlZSl-2Q(osN<4)#}Ila5Wqr4lSi+0D~sP4f5$ zb>m&(aMy=3D9U%1^2el8M-v!9()r%LrMcn5*P6b^f4mE|P&PZtV7*Z%fms(7YMV(?XQI3Hh&V;*C6x$xQjmzNO+ro6Rp9IJ8 z?AeLV!asP;NK~1uhrMu`7kPmhZJ402f1S21y(hC<=S(W6XEB+%vz+YFigbJu%8`E1 zXd?f8P}4{(Aq`eAnPlP;isx*t{zIV4kaq)#8D9yJNKzZOYh&UM0VqdqKUj3J+?! 
z38$a>nhEE$yV8Sd58~pFeK8$Z#j(ySUD@~LuOem53@+K5OBn31X`Xrg;4t!mjN}lD zBFCP=9X`SUn2?BmgFbjCWYMCoA#mL^NjWiw352dgwR}Q?xm`tKhfhA$3mgin6>qTJaH49?7z2mKe}Atz5r* zGObvftP!MK-)rBmXUupI@EQAZ0XQLAy_QE&nNbdq*KS@{26BvB(NlP|hssW!A4s>` zKEPUJ8YeC)F~Ef5uI=h6r0IyL8xph483UiSa)B?0z9qV$Uq79C?ew_!WT2>P|LC*Q zmjnaJsE;+%hvnjfA&(=evimfeCd(_i-x|oRe~CLJc=HffsMcimH1cSGgg#>O|5q_b zPy7jeJilU0-Y08R`}Fzgpuk4uq!@`7D^FfEenULh5(ZZg!H4@Gq&ff$!D8c$da@GP z6dTuQ!p$-d!7LCN<*eYdoaZl9SH4GPS&({Iv7?XtZIh*=>u!C2%7glKS0y5)OH8HH8uwXIt&SG8lpRZ)Ib?fv4^y~-d zE(C}dpZg7l6tE$SgxuyXsS&FE=A0o&ZuO84rEkZ z%hO^xIy(p98h-i{<=*1pdu>YzG!phH=*HBzgN_J;LK;z83ZdFlwinQlxYy0$xvr7^ zoezT|136(pn9YW!1&o7?V}wl5SZO0^(mkkWv)7ZL?PC*IFU(Nzt;ei{O`HdE(2SS* zd^>4P`$;p*SI2_zE$#~nZM$w0@=cO8*OEFPTnZDV=*gpFh<80^g6>P>I2q#gP=GE7 z;2K~}(z}Y_4M9E4M=J3oI7|*OEnOL9Sw5tAFp>;V0!Z>^-)fz4A)8T+ zTe2p4q8X*6+y)w-D&B{aHC9j8HB9(Uv%+g=!e1steGEFQVRPuzKONc=Lt9&0tjH;K zPZf{z;*GP)oRE@1_EA)zd@@$I0io&{`h>b_*GBdA(Rchiz3*BIMcO@uR`qi{2lZs` zb6iLhn40V%7bBnRD2%jb*3K%~Euf@?nv96{LDX^MfY%u2YPh6m^+h6J;}%0SOHHIh zZZmXnn64#TQTM^!O&OfUqK9Fz+H!QbXqEYhRAM*f;X&b}A{j(EGd`7sfN@)gaqDyF z#ffiR!SJMe>apIdV4I8n8Q-bQ*TiB~l{x*K>B{#!h+_f;h#e*Klu(;#;T|4Eu0G9L z<=r$7q#mKup!3-5HPj22BK43ys5^Na5>+A=1CPXY*Ee+hYo^V`#EE5Cho~M(xz=H1jz(M>dH4;RINtBU(73&YwxF=yht^yUeu3R@E}}x zvcfTqH5FG&rgh5Y`90pQ?r~KeT3n44WC7Tt@O2ob!BVm3vl;(Pa%wKz^a3%mqtN)(Yx9RAlfj zUg}70E%J^-jKDcI>!j{-8VoP``t+{IPDzlS+)JJkqH-_+a;HSvz?QQ zQpt(hJH*NIua6ig-VV`Lxy8k3ngd=UC_&!rR%LssQG+cV(?|V9jv?FKYrS7v+7Dci zeWdXbYMxMS+>i;`DNw)7Z7+a}`5a`PJUwl=LaGO*J*Yd0^J*jtJO4t|%G!&#V8!V1 zbPLUWr&CtRb=d6ne(RY$#HR>Wf|*UtUEXa!rVMS^-j zXl+9N%?Yi+qefrREN+OHuExt3{|-mGVS)<`VVnKp&N^?+l}{+NlMWKRRz|^DpTK)@ zx+RC$pE9qfj(uN^yX|BFlCpjkcLhm%$?wy#Is~M<%%qTFRfK25qUEigIQc$lPWeQ= z9`P!;liXpId)!^C%hfX`sP=8K3x$@VM|td6v(G}fIp6+BAZPUEeSi9i%Ju2GRSIhd zwgU_gfV}@%u#v8tufU?|>PE$fdmG$z5S^ zzDOFFZXBaET37R4^;W_J!s?X&Qz4-8K$klLZ`}eEj34ZRPb5)#M%hQi+KoBu1*=R( zA$z(66>;mkwR#>;z?PeqK{Ddr&(zMHMGy#tT#H*UAZghF%SC%!(bQhxU}?KU*j0NzJzO5M zFg51Mp~hqYy>eLP0R&x5d|rtNH2yWAR<+R6ake^ 
zL_|PB4bnwAf&v24K?o?l8j5rTh0vRTNbfy_@WykldOh!)@7;Ibx$pbl_wom1GMQxd z>{R&dehkz%MF!3k>vd#mJjmPoOlf&? zE`%4q56~3G+)G2+IvS|z^;PzWmISj``fRj>8sFhFfG@WvzCzj*9{Zx-7j<3hu6)#C zIlz=|u+#HATKEa5^!A=2W1M02E}c(t?3DycN9WKLv0WL;s)r&C5dT{MWstN!FiioP zNMBFds1I6jT}p;`aaSu|z)UaNskE)V<$;)}1U{4WS%_)G>Of;%@%*KM!f}a%v#$l; zpTL*mt!>%fh2f=pehHRO*2{ktQcZJalg04!$OWv(lWpnJ8^3CIRe@1 z>)|=kS7d25Le{n9V8DD5>xkenFAh#GDyTePcZOH!%4Ua~iAYz5|9a;7v81#Yi1g=e zN+d5BWSr*`cIIT+j30BZz#F}6^Be{2)eA~QgO;ppEJ}r<#>6U|zG|Il&2c6A5{`34 zxb1RiV0Aj(#5)Pi0%;8?*~0_*^npj>@>goYgOVqh*$ZCipgVl;-YaCKW}hr&+|)SM z(9&A@oUo{Ve&drSIh6gT+UY3UWPJ$(w9#$rR7YK0)2NsCPB-SL{;85jB2uaj@9cD? zEnhx7D4Y4IOYmN$yqalMBD6rCJ9}@MUoIjvxmRa}qqiIFpHd09! ztQAJld1~aXZhla7$AKg79}tdo+El78znxQyRnk41Ye9Nx7KXfhavlA|)?Z&OC+$vE zw^D-cn=X&OQS+%6cW0H~h-Ytl(7t}XJQT}zY~)lyQ|BQ@&HR&$!>jp^f;e7iQe2g6 z$BenYhiR;kJuE2}6GG>(_~vSL2kf{tb+F;Ga-1n28;pmP4~T8T1{g!rj_jc=^iZK}}<5__T*IfS}4&{pfhAwA3|Zr?JanypJ3k*GOPno=EhNm_a0NU-;N z8`ax)?F*?UHbg3tx9?E1jOEg#yfPjVnm=kh9CHN}D5L}&*k3h7Ch1Ra=?B+WMqZd@ zq8QgYLB?lt9F!636($MKLAvUMzJrWIAw~wL=EDwh7gx&PiMjsSt+ge#2poF!)CL-u zXdJe0*s8PeMmQ5y=5>n`6+zn81iB}1EX#%?+PYQ=kklPH{&D$yQ_a*!rZs`l6-7;b zM4<$A7B{K5QTP7YYI9piTOw_jKfI7NL%HC>!) 
z9(_tUOJf%}o8uq=1*;?yQvIw>t+thjJ=2ak6~@!Sk4m%Doz^L48A*Fp&(+|rU=f?6 z)ZKSZG{^!Zqvu(^M*1tqT`iAAzKmmSqhbi46mO>V4g@|)pRt<2c9~-5nd)jB_0;Cv z3p>8xhe-w-4;;@Xkh?2iS`p0~Tezj4x&>W1@2StY`qsq%NJT>F#GE1iJQr;vl{KR{ zmKS_RCO02NdUWm@c5V1YAamyxULAi9%+^KS^$rI~;`Rj4y@aX>{mquU5LjeYs zBJBBP(-l}xjiQIPzbGHOJkLOX&z##+25~<5&5F9uN25NqO;t|?Ij19+CMa_uoLdyH zDeGkT^TI{ZxgEE}cd1I|Z#AX^?JyR=*q;XCw`BX=$7@crtTg!fC;GGZJzl^^X}7tr z^qTHkLQGz9w%yo7=PSxQZXAIcx9N;^-EEI{s3Ap%KWSdQHc0|5&^e5k!M70^oU7OL zdV3m=@6uvfn+IotETF-F)|(z%5sVM+UCCnpPz`l`g?b99p=a=Du;LiT&x01T+#cg> zpXwahVZ#;y8!%&ip2_v;B45EpXjgAtzNOBJdu;uf<-^+h@XMdQ>^o%C5%KUT8b>bm((utX0ih$NO}^=cddM`+0>*B$SnhvR|?0{$Awbqfur|~Ai(uKEbL4>OHb2FRU=Y?* z4lB#KwZlgVAO5s0(m~Utv8kShkj5c6!c_{N`QrkY)~gQps$j57oDt@cTF*6j!n_#X zggyi8Hr`)&m%!SEho_3hXJ+pxdS^GtI@}&Q3hByl0jgO#HX*gDs+0S`&2l%ZgB#&b z$)Pbh!%Ua9AEhB(*w2zJzbRgAx-C(9b}--Ip*Yv)FXeHe2m1`I%9RrNdzA)Nk)I!n z2`*is0_9qtCDt)uWZ7VKdW$v=gQL{5Y?d@s1H5$WAOh7AWI+}udxgutW~9s=;$Ps3 znQ%^cV8$8W_P~Xt9@KJhsaaV}d#T5;_x{D5t!wJ7LJ`@_`@kboyLrBAvY>8Z+BgO` z2IAby*lCAEh!sxYONyfzv!39Wwe0w?Jm|@NVCY=Lbfr(%`x-yt_?#Y>) z0j8sW1T)lM9wZ+87x#wlSC05E!P$>g|LZKdYp9Lj4Uq>~7+eE*2SFQ@C081qgB^}Y zvE4}Y6+R6L0==RIt&+rga8D11?WX=s)LIXuu4w~5=~ttF(s;UJAT)zVU{^Z)V5<_! 
zAi!}E;0s0L#(ytv{X~$!)P|!0Ih!8FW5$^v$9%}`)34m-KTuiHxmWViD&b_lfsFp( z0^{tsY=Q}Qo7CiGnYk;0zyxgUI+B~LTI0u2qL({$ z2PEjeiA#E&Dc)SxH%O~0JYPp;Vy(>L3+M2pAy@hFg+E(cVqmYYWO}${Yfojbz&QYl z-beW-QF?1@$x??12a1FUUk?W0VC`cX&%=)Y0#S}{wZ{IJBH;gR1pZ6U|33x;{tO%b zr|kddL<6*LqOzg-(aB)ITnD6U=?4aW`Eo>@#Wicfi@sq7ERZC0Ur2-9+8@tV4%wqv zAH^Th02L{&NWqWT*k7)Uzg5)DpAoZGES^jE{FCXY0|`;6B0w~dC!6c5B)}QS6@7(H zPkz09;Fawb=4G~s{~SU8J^Aq z8LK`{4XCxbEYxTnK^RplH38e%z+#rbkO|A+95`7`w8iakpl}#vf|toQDrRy0hR@GO z!N9X6em@)3FcH6t>TQB`<6!H@pmq2Tr{v0$1iDf1DWcV-);^i@!k;c4RPEM#|;tF zi2jD?Z-{dN;ymr2oTv4ABOQAQjHj|qZN$*ta}jn5+pY~KXy#-t=h;;zc8PQ%k5!kf zSch8=E zVs{1TN!qqyHJc+$`9p)|*i$#v#Mwyp>er2@u$u?c|0-THoF*$HH(7WBWi1#mBoNg4 zg)V67o_(jMz{fv`-TnG}4qwV!5N?$?yOevfq$J2JHT*TXy+vbj=k57ID6O$Ug3kD; zhE0%+()b}fIpFwI=VIu%JD)+cs;gf7*>UEGo^jDYQ(OvwqN~5FN%A%PbUp9hS(DOY zuRg_co1x|xRDdgdAjxSTIN>fu7(h)61E_bSgcqn49Bdp^lv0On?-?h1@(_0tJA`Nq z#DVbdkb&1~>)1_Yu>@M_2hZEvCW2hL%agz;MC;O?J8bTO%{U%WG;ecf>Un2`gk+fRYw0?9FTN9?LFmS9o zjAoi-tjgbLXm9aAopQa$X{;|gwU5Qkqm=U4NK3|M*4 z(Nelv?|s?PXbR4aiQPU{6R>gFCe_VHEbjH-4E|6Y;|OK2f?3j9oZ z$OvPs#UU1|0+xudV^yEepPga7)h_hzaOkdj2f?**A8-#{$wAu^q$3ICkk1GTPbt`f z6l`Y+y0-~y2>r2H+QM_7fY1d-q!L#4fl(8@<=?+S1SJ!{NbH;+=0dcbzq1D7=paJJ zUjvVc?DI#bTJLYq6?zAbBTVE$hEnJ@WaA}`|H7eZtUCz%*;=xU9v>2dmA`s}B6<>w z|H22?8E+I11V6nIZO`Acn)}&tT6tY2U_q7zYN?@=dGDT<37OyA%+*(0iL?H@Sin6W zri9PjHT4MIf%Ec^E)AYsw{3Z9c=DVkYh*~2Qals~k|vRw6v2N^$w|uzPrb|Dt-d(L z?yIpGfSM{qZGFfxJF2C%mkilifbOmH>|TPd4ZwD%wMnn=!V2Z~0U2;%;H@vz(wpE4 zG9ug$0}+9t3$&;X%Y9%eNqGQ_=_i1T8l&=_T*)clzO1zw0Q>c~UqYy>mb){aTogsH zn9Uc|(~MqlAfsryv_aWZlJ}P0Xj&)Fc}xJM={I|#Qi03!kli5PqitHbj2Xroc6j@4 z$3p_=Dn?;*!?lF%YOnEWk9de&iof>edkKap3fyA3Qkz)9ZmP7%RFH4d9MAd|y`wMKAw6D4<~7mc%wq9t;c`dYowxh}i} zbD6jN09ScgutxM$q}Q-G06^?zIUd>MP4_;7nHP1>bC=FcDn1Ar4U@=|CvkH-JN}S>E(xUjtEB!>tD)>?S4_)%-}wWW#lBlTjqitGu%vA#%D@;b?y(`-|Si7 ze7M?l_~7w&usWYcRgds{QQh&kSQ`P;XM6e@l1ZqS+_441%Y(ID8oY`}g9<9hHD#Lf zgqQsuIicrPk&X3oCgt3W54G(+^GsisIp60Y5;I{Cvw`9ec$=L%cO!uAtHcLvqa<`!A6!isvMc6SkmN;O?h 
z>atICG*4^5@%&qF*r*(*Kvg<*8Knk9KYwy%iEF9f45!}pa)@?WQ?^JO(zW&?ba}X@ zBA;udnOS9#tCYX>)Eme4(%K{(w zFXt!4TRG~uXe>#$0`M@7BJ2FxQ48$CTUV(t3QB|pF6X?tLTlpd!|7%;#A?o zh=5K#buzqZJ}%S+=>8+{IK-1l_}2aI-vH{Lz@_XCCpMkcrjU>p-zWvDHwV(}B&id! zM4yOavgRZcH`Ctdq*{GZ3zt`a_=r*H_^n$Y-s1wRmwiDY+2w#4)W8Lm&C!Cq_|52x z=8L-}_K9el?-=wCwMI8~u_SXql{_~%z0p;;I#ReGV9cAnZ$rj~2Kka2#}~VNFDq+w zn!}^2m~TSbGf6}6FzaNs^aT5%IYY<~n*$NHlp67lz-wh9X&zSGgK!BG&UNcgRqH#O zNpiT9GI}0STvj#+)~l!#6qh&AxleNUKBU-^^gTnZVavxnC5*S=hWsFNJ1Vb+yo z%M)oFl%8>NM~B9gI1k1TO&o750vsLw>9*xcZnic^M6V9k7awjpaw_1p0;|iJ2ZI+$hU04@bG$a<^zbs`<1QGM?VOfn0xoc6?PlwIV{KMed%MbZ{R4 zOSA?)CYNTr8Z#RT-{|Gwd0oJ-AR5meD@n4~bwa8jWV{0UcS!N4UBnt-M<@rqu8z@T z#OcuVX7Z6Yr_%(VuO2c#`S4JaR`U!2)Y}o((Rq>+d_y&4)zymMu-`xY{{2}J&Z9Jv z6R==YB+K~L?%Ca49Ym4*bba^F^#w z)o=AXB+h5$f>cPWnko0FxQjX-HiIbtUg;axzS$KhKm3P?!1tuQa^-YIw$b2BtdBl} zZ%RPt7lao7Gl8L3Wd|%UE|{_5Yj7r~SfjGZW`W|KKmyOE;(h;h8Yxf918K=%!CPAW z0X4fsu@-s4P2&rql(Ll9Zfe6q1m|JJCwuPRH8T@ZPN_L$Urhd-bAyX3ydrK}0Bx4&6f>D*6OM1|V+YtGWf?7V)ekz_ z*w$VXSbUAUm*nF1?wqeqp|;$?^Iqzb5%v*?R6pGmFY$pHQ!~x9em~tu*bn4^e}>8b zZ{N>9ZFSnk3@Ro~dHAqd)0*bJ>#zlr@I<31i=W_v^*PWH4^BmSs}eG(KdYVY^;L+{ z5~Y95zJi6I0mDCX6o+Gt@o&Kr@m>4C6^vfF!<&qOt@7NtXSXARLL{VQSQ&ncj3$ zQTrUZt(ByC=$L@yB){o%{phu;1_KW&Nqi3QyErg|AkRxySrT#T=@ruaMUql+9EaRy z=7J-iL91iJ8D_15N4&ynjl29L&?rzZHGh1CB2i?fP{DzVk~N{-RAYBO)41ogE#5!` z6C_}h61YBdh(6IeTzDpND`nABo-Nd|2~;v`3CxN0%^pBkhxzbTQfNb=i{YivyXLR@ z=Wy=$_w=VH^;}{WPWgC{V-Lu(IKWjl_*9*IIh0OY^u4`#_Y-t8(2L&n#o<79>ju|m z0j>0-+&ZpSbdZnz@`=?SD0~^>_sVXA^rAQSO4m?Tn|f}T-JKGPxf>qe95l%(^Ev&z zO~d3uyPe)^V$W<362xGgdEfjt-$4?m35wD<(vkKde*RI13P;YZi9jIk>W8CeJchH* zRbq{!!y9y9rtF@XBDUWtCgv9 z;S>MJ ze*vSBGE8eJ??VYLf>W}PU!$O^UmEu|@H44?Ze9l)tu3#*(`#Ml1a&z`l&|Q^bw;N> zy&v12Et>Q|D=OJf^u(QeH0}nYJ2#wk%SQrArY$_l%eB^zLKK#!QB4D>6!*Z$Foz0Q z*p+fs#M@wC6z|Nv=KyW$q5dyx!~Yqd{J(oY!joD1EONIeo-y|V3b3WI@zHF-Wc zGJe|RK+mE`8dQJ<>#8YS5@AF4Zvg<3D_@zHe|yZoW(7WUqcygC0n6`&48Sx9y8HtJ zag&fMv4Mbh3&&Gug-)WzM#N@df&E6c#>=EA-jxI1Q&&!#O0LlTj&;^*Ej13^<`l2EsS7+ 
z=lQY8_aKm;>D*U8{q5(8XCn3<(IUQ%EaG_j4KDt_!3bVLoY|?wxGzHN+YMbai7VNK zTDMAP&-}yxy`~uh$jR`-uk=AYZD0LNnDDsn`~YQ#;r;j(lb6nFEQ(F4hlm|L!!xXG zjZtJg;!Ce}70e?2jmt%RCHBLAuMPNlvN437P{AD*g>A8}W-Y+FOTJdk4U7w^rw-j~ zz-RXD14Fhgc^?&Pi93k>K{N;AXdr?C5etdFf#@2EGlT!t6Qk{BCH$*BA>$1}!c6km zohW+mGyzSouMy?2^|VwO9@6^na!v4@eV|+H>%1~HS`*BI@%)YV9CE&v?WFi_ZT(|YYF5b P^;*yPTR9cV{r-OgHVF?G literal 0 HcmV?d00001 diff --git a/docs/img/pai_job_submission_page.jpg b/docs/img/pai_job_submission_page.jpg new file mode 100644 index 0000000000000000000000000000000000000000..f49a1c267eea2a48b986e52c97180b06a524264d GIT binary patch literal 127488 zcmeFa2RNKtyEi^U)F4G~lOQ2FA)-zY1Y7jpqZ35$CLxF(Jp>cI*XX_XUK73dHtJxE z|Fiem*=O(X`u^8<-uJxkd%p7>T#vPkXO^|@x!1k!^1JWn>ig9^=(eyq5zKoE23WMVL|pH~ldVRAg?Gi*UJ7_*uSr95_Ah4 z4fycT?|>kngLuZ2QZ8J^lz0EK>xBJ1_z-*rs(@bR>I&$#?bD$QuRtezSD-R;d~0`V zA_Wl+6~Z|70G(>QGh?=eiNd`1MA#O5)fZnI}{rOer3o#OVHHXX^YkcJ2cz zCdTC}5KQw}58ve#D^B}p=$>Zd4Bf3ZGr~g~>&)-=vb0*oD}Cvk%r50t1`ceWjrBRo z4}?UQghIq@t!zyNMC>t&era&ylg}k=oT%->F0HAk9BrP?!}x4d?rRo6_5j{K>vnX! 
z5xEMv%d$%g*Pf8W!*dI>Zcd_^e^No5p4ck#Bk68$o*PpZpEWAVd+mX^lNt?1;4EX| zoK87~VE7)X-;;zB`M54fn4!kxnjfjm2&m$U zo2*E>TbkMweigZG=^byMV3^P`R~}Dd-*-S>s0g{Y*qIGZ&9pcY@4{?RIjaxe$t6>V zA>@xRb`1cE-eOD%1pQ|(=I`|w&|WBN1|WLA%Z@u~W7t{P6=S>o{GA#+$QH-&vJojr(*LE9;174`XVBLirnukr;** zv+v3mbo#?ln4)W8LQ~T~x|5pT-QzDk z0{ix_Kp~Fo1nPC^2_#{Zokow0UfVsV{%qT3Y{_5NUX>%0rNd&$UD+9YI~7F0C71qK zqYjIj-Od--GSs{+duP<8SD*_(pnc_$$#ZGYs~F!~2vSa@3tfF7PBxEupCH;db@SVL zd>X%RiNv9JV($HiuSBTk!!!$>?M`FA&AcQM9Oe!t{Yt;QbeJ*QT)yT`lkJ)7!8-n^ zp~`hx>x2BjcjIT57O}z6Y5Ntvs~_)KvpU`7$7){obb*8yxXtNRm+v2se++N7Dlob$ zKrGNiwr(G`K%LVbzE~&rvc#QO#%F;29SR!N}7wY~v3tv!c#R!E=9hre$e%B0*TdgfF90 zJ;uEw-0iZUzHP|aDu4Usvx^@LBjt}})Sz;WbeW3|{8fKWHYga{oL({@ zw}Xz1ui6T6+FW~HI+XNGye>EHP89WW!ZBZm{iOedb=v;6=7Ui1H%ZoR)gKg7d95P;lU2n3s2TDnRq|JL@?UY)12b{t*Si6hbuA38W#P3fyq1O6vhe>E zS(s8VB;MT7rmS+tZ6(UG=aVT@6ewpZc1rpvnliw6&>q~vPzLuJr_?&VqnyjQ?7hs%M1QPl;_+* z>d$sc31xihe?7)EJ^n}YhYhf|)=x&SKyjW6$5d*_i`ePzQpHM6J}IJHg0I08)(cyj zx@%8l1voAL;%;7D*wXFTr}DW1;b1~Gel-LK@$A2^rv&2ubg?^YP-m6vvD_uL8mYGOTW({Cdl6%WV=%e6HN+;_WT|-!m{Ld5dP;~fcJ&}BKC}$5J@k8 z1scFYw*C9c2NWbn8J&<~mqxwNM9&^%=>K+oshdOsUJpW14_AdM&BH9fm45FJ3DpUX=9Nw%s@#&chSxsT9mQeBhkU%^#MixUeGDq?sA zibohtR0qvka_XPB5yw?SR4&@1KVf1nxOD_@G#ZSSGi+D(XWp^|lpBjcIDsZ$Lqo z-f8iM=3i3+AHP!&w`^L_yQA~`+tfOLT2Wh&Xg4n1Mt=6+0T%DtP_hI3o_49*8HzGDU$qmeNs zy`my%cTvFI%qZTR(n69V#zgoXfz;O*$||uVg{h;EZ?oMCHqLJ@_yRtBse8J_mqks% zYvjnZ`L??S11u)JL@7>eF(unjS5aA6R#KHjX*?qs{-U>-!1XmI(Ob833VoH|G>*Kk zj5+blwk#>*w=s(4ox2o|?$p>fzR^!2T(&3!5h;lOO&2je7!C(tES-%y{?Y3((P0m9 zOJoBABKO;`KrU*mI;bg_T`s+^+(@sJ)6PuEo9oNpN}D5RyQ$Ehp2GPJ`Vw1vedC`J~tn8 z%*7hdm@dSL=MMkqgm-X*?>>QdZQ+uhA8V=JovBS#+q}1%(tWgyjd!yt!8wZGR0m;j-X zX9_A0Ap%8y#m56IY;;FA7?u9Hrq^L0$)(2TOG2 zatoTah^-D`!um+rFN&4;Z!R?P+U|x5eK)1~%q{a(GMA7F(IE>DjZ4?&Nhr1)Wb5hr zU{T!7`Q0b7bBQiEegNL^0kVKUy7LA#b23n>#R`ZhhMuVi3C_y3OwDbt``C_q{``<2 z8;8z3o-aXRkCA0lYmX>bo0h|(?j-EHE#`VyW1;Q}J=}LHkN<4lv1$>LSPSsRaCObj zd76EN7ZIv`xRyQNv!Wk07U>+hVs%hna@sXBF{qY`l-rMq%yLQRp*C$JndFzn1t39a 
zadYH7x3MQ{k2lnX*(+iunji!Vandi*ITbl1r0*a%W7`pHdNjG+g)2g7UJ63kJBk|Uqs&KW*)$CoaynYrdE9fFNyI78re=>V5(Or>wS zdF;jVji&_D=M)(pl-UTqMU&_>@oai~Sr$9ZIeL*sj8BvXOZR=+Eyb1s7&_kGa8nl# zyV%&ugzF)r2hguT6Q$sDN(I^Sr7fGSU83RT*z_w9`yu6gKA%3KC--S~!DX)6yAt0E znt@VpANFVgB`Vw5+HY+q@t&{%wn$mPn^Q__YU(Q7KlH|~C&B`1avMaD#XFUXdb)iN^X*`w^TTwd$X@;RhY>N z&*cy~nx7?{`U&1-t4fp0-{eoAkO;x~Zk?&qHZ`P1C!FZE|3uXh;%`fc-&5$~tm~s) zQUSdE)2GiyAKjwpVP3Xc!Vgp{5%>#LtVE@^XE-V+g}02`UP@mxc7KFrryxDaJ1d?IL2aI{KmUFU0AFI<9pcK+mXLnF1yp-`!@g41R2d0hzs>PC$o)r3I$p^ zQG+5FVu!?uu>TgcC(}#F(jw%v<1lOUXD>UZw_1kB@%+#FVL(`{MoB4*_%fW zBrg`8uAqp|ecjm`_0B#+cBXy>XN0JgAa4TBJs;Fe!%4m~_-Tbu3;+=URJq^0PiTZc zU2`9Pdh60oC|HNYiRpg2@vhr{VT!`PG9BW-=zj@oj6s*RYtJ)LADDv*MpOWKV|Ta- zJHK7P76ZPVg=KE`Pp_dQfY)0ajy?k_t!9$#>+X{^*?PZ|epA0yQbMYau8Pc)N02b5 zls+Zh;a%auR7@?_yu!V*k5gyy``wHi298y`@pBzDawS}5nmSFD2Jav4jwzo`rtKoz zT+%Nu$1|Zk$J>28y}P${htz0HFPaVOadyN@cw^HvAA0>0Z62bCk|JJb`z$zCm?P|-ct$}6 z>&!O2IbFECpIs~XOM>A$Oi+GLSg_m9@~rW0bn;A2kxOc2sj0g9En}`TEO?z-dih%&u`tWb-Q-<=sr@gpajabsyBgpVmDscBkT;maVXBPP;z| zBEyKB8ZR+Rb}zU5se3FE zRpoETto`!hrNSL=4#k(#ct*$D;&xH;rQdl~`PFM}b48Xk1XXaSI-qnU4%i+kxI1(h zYh|U*6Boq%YY-kY4M@X?Xc_RA&I|$LDy6<@u z%dZ{2s4~U!gl&BpJwrldNl1h^dl3=<2=f4k$n@SARi`)h^s2T*ckL6XikAf8+Qvje z&KNtz!wAC;{sCXwq}kpkp{Og+mKEF5A+6z7un#PGtxjVcMZ3j69?Jc}yG3>c8aVaJ zr(5#;z~zLjKq8Q)#@p%p4!2Ff(3PN8cT~XulPYr0|i%+&)u?oT6jM%#>cqO5WUU5&*2DUvB&@5WF-WlrY^D$OFnB z`tRMiH$Ca_LZ$Qx=rY;d9_~tR_8+|0#vU-O{?~IgPIQt5ejJvpDa0Qy#IQPoi>4z>G z>mg;XKvEw8`J27=PxmlLVIe&vQ3#YH^&dU1AF(n^x|uQsK#Yj9VV1Z@P3hmeBsxfN z+>xLP3fa)yAYW*Vz`FuH!Md!0?r6euPW0Y29CARXDo|quORM8+C4jC+^wWj(U_(jc z@>k5Od{Q(IY&LQqRhrZyUd@=2X7GOrdiXzzGQPg=zei#yR(E{c=;PFB&2Nk5PEp!4 zZ(9V93M9>s3UU=k;?GizKRuQg>D*F=76ESIq>VF*rIw|~Oq{`q@c>J&AHIpDRHIhS zXL43F->rFgPUbF`PAQje4b=yBr_jg&_sKJ80GqNEuvY`e0eiw+JIv_*9e{{5ykxiFA zK!kN5^R)1g;{+!C6ec~5YK%p>XD*xlY|mE+R|&VNI#8aFI%u4K>A6Dbx$4>pM#JF@ zgii8j3($?@nrzo(`!C|QwD)q8bV~^T!~CCnDv`vORyufujmZ1$V;OSMxjPOAEN!y9 zGKG#PKif-3P;1tkAO4r7LWq+ 
z!3T2mTZptH_s>`%XYjKMq0nou!?h>nzs{2)%r9#Kr^Bn#0=M?`v7gMWokpV0@-~@- z^Ciy|#=pI>Q#~oU=mYQ64D2yn9!mak1LJuactzwgKNL8?hA)1kURJnX0%`g6W~ixJ z)B$vkCs7=%nu^SVloih%p6j`DL(BIdwXoAZ@RT!bR_%E3_JLW8t;G(Y)F=g>nxXKTqH`0GODp_YaXYe)4bW>=u! z2Z#JVT~h_gu`B>%OStpK8HtLWy(YyqDNg(T6I@{~QQ;!c4xQ)u@^2diI*GUfon5fk z#xJgFW%G7lgc*ES=2p;bXJYD{&sKD#U*mN;FDiVExC5Z*fW&L(E6`-wQrWiN4ah14 zz~KNdseX=GE50_bu1&H3I#Ud*O<6(Q%;-1!Uip*Q!(Mn3E@_Meb|8O*%Z(aK8{hbz zKC;Tn(m+~?g943rd+$RVLP%b}S>etnRT$sB#qXKh+jIJHL2Xm+M)|QG$l_c$@@c9g z4=noj&2akH)1M$Up{TNLP3uVWw2dA^V{r7{U?a~(Xsw*RHaK)GNNhikA9r@ZQKynK$lzCBEIQcCyJ6aYE zaA)EM*5EMg6NlI?U_TtT=D{b`2vU+Uh+De-IZ-kTBjFU8Wzm?}uNk;rT6jy+F*Q+3 zaddGcPv*?FM#9LP@`vQo8HKE23ywirTdw(el%}swLyVC!XGuk}lGxzAssR8k^be_^ z@@q`)Z>XYj4fOjhp&Ow7b^EWO0UBHd2!N6Q>Jp_U_2H3aml)T)P?FN?2|g3n^lP~k zjw@8!-wSb=yE83ggB^i05f!R!JTYo`rsk`9>SJ151{Y87Pv-E$4ia8`HsceF;*pw@ z5K|qa>01=iU)RL%PoceA`uwKoTO4{cSbr_4!l-qb)&@KKLoV3|T?}7OXuNCWC|nNk zExKjd<=%&U<6sv|AZZ%tuJe1ob*6@{P*df8A3z1Z&rVqh5xneMCtcv_n}MVK}pCIhv+b=X>dAuX7Tg=Hd?A^3E0wW{>b zJ+kKQnVmpK1507uMGiVp`9U^KOL7naC%rW7?JPQKd!sj)o>66NJqox&=bS#;`oY!DXI!K@ozrX77vNXOVT)?E_gOBlmvjex`%0!d zvme&3(M4ck(XXvyW#@T<+uZ!ZOdm!_S~;3N=&Gm=b2FVFC$c~>HH>YimT+oLY9vpy z?$90#!W6m8V!j6RGp4);qNe{CAmE>CN)=N7RXZ#7ySD28i(=^4o`iq%BnUI51Tu>O z66jx-O7+)0@V{CppT@5(l>a*I{Ms@ilK-=X@;3t6e?~3;)|U7$MtZH&U&;dwQ%s)# zN0PVq@2Ch2Atp;m9mB$<#l7K-Zx_Q};PJ@sW)-=}==i5&KK@Y(@&6;^fQbR9P4cJTEP3gi{w5U}&S#)o z5tf4va=wSt&nx)by2=!vEVrY*73!Mz;rVMj_tZ;YGgSgZaH>n)af@GtKb8-Pj!gSGME=S;ydmZl)DD73^s zKsIJj{Vg)F%Sg`vg4>sIDC;L3RB)9Mxw=o&*xS2QQ_BP8v>thmGab}~deWf6Ki>9t zsxp5mEpFNIm%<4T@&^n7eU$l z8FnF=!3#pmwwwIEUvAUK^EjMNzJeYx`M2uw^Saf|__hV}YKroOb@8dbCY&SSr#d>x z6pDBxP;NUZzB1;QZ)t1P%SIwn`uO695QHXrj~^9HQjt9ULpqf$PV91m%I9n%v?<># z{sqi@*rCV2j;e!oJ7){$SVf-8t89aY6z+VI>APEocNxI*$>v#BqdulcN5f(A7G}+j zVKu>Dya@;u#F?pkwMJJX(lRaA9bp8JW&83gP=OtE{ZVN>ZFB69`qroSex{{gs1wPS z&UlxZc^5j6p-ywh{c@KWN&l<~;bTYitFcIi~yjL*9c+W}3a0xhc1afYrb)1zAC*g>DXKD$`)2R*n$2a`5DC7g(UifpO4$QAT?o6UhVHjnnOgG 
z#5=)$OjD(W8)x?{+h;TUY8KRa!ah2yl6=cAc88Xh(@pcj!-IoO3zxJ{k{4=ZOhd<8 z)WhUQ;|0i@1Gwm$-~R~W=8Yr52jxBuT^!AtG!Po?Rbj+n5rfi zrkEc#p20cxpd!_?p|cW&H)3UNSlj>?%CG<0@|onuX4=us0}#^om*HA|?m`MJ^(|6M zD|)*QdV$wmtWDD#Xw+!OLd>MCxBHVGjT**j_3_FzQ6&`99b%|iErNy+K~0}t*}GCS zKN`^9)ou7_k>gg`z$qh>;;B+#jr7FN_YYXr@{4O2>Z{?>3@2a46i^Qj@5ncGaitFt zw2qWn?Jnx1k*xs&{6_N*1L^z|D48cf(iRZ195P8v0eB9+b`c5XVTF9DY_+n-Bo|nP zZy1dVDy8M&bzQL3+rtpp9miQ^`CbDT9A@!Egaq-F4kc2kk_CUKlaq6Y4ec=^*I(mZ zw+4qCgF^6>LQ!M1uJX4rzD6$_785^BT%?>~*Y(t$d3_=F?He6>(e5T^!V-K(gImE? zgQ91VatzLJvT;XfF5YOeal4;0*z_6tYz!rLf3 z8)}Q3+y$z3e)y5KJP?&kZo}=Q?yp#Unsqq$V?)A*cbnt=6T-!)_pm|2-U(vq-u||@ zpeIxjucWx6{Fg1JwoX-NX{v|rK9eo$^_F~{=|V?MV2h+HGgc7i;67rt+iE%8aW$u6 ze-PXeAVP;l^TAEk?IpasB$X&!%dRBhwx>8(AckWQ980-ZQO-C+TjvEwtdf?A0>T>- z{aU!f^JxW5p)EVinJV`_7XgmaCl1aI>YJtt;>VR*GL)Q%m;=Z?Y7cR&$)MQp%G+Jq z4*gI~ExmiXlTY)8+N@t?k;zrb_0;f94S-#a`(6*pr2CJrwg)U)Ns8RO?FUxrOeMg<(PHb+l=lqe|kJF|~?wK=}tPrmS;N%c_JqCU zj0Db;b3<>OwfR99yy zIjec=KCiz9R?z9oZ*`9OG~H}Z)JaX4MMqnfh#R$-{?Mq`d?Hiv6^FB>ohD;S$SeK1KVMxu3Avnlf<}Atc;PZEuCq()fTGH?Z^0Gzl?uN<+oLg7}_>| zRmUG;8#HDu9LSg#BKqIdnz->#n)jcql%J;ZKfAp~g8VJWjt{)~O6 z&3j$(Hjie=kXcoY0FFpc^b`3@A?!Af!+i6J+1qfbMyA1@HLaJD1XZ1o!HL}HgF~6K z{N!bm>FHvW)~1^-Y(G`B;=TCNw2o|8T*bEb1=_%>>{C78cq88O(|D`V{jOVG46%<1 zKHMM>lAMUISeOf-5H1QJUiNO@-&a&R05F5b)m~WO5GQ?aE&`DPw`jQXig|fxyt>Cj zb=`PbFUTs9l2JHqKw7wW%8lNRujtu@DmD9S?GtXk z6DMrBk7KOTnx0%Ph33p=QmqwYmM7uc@eRYcv_6ltH<)FQ`2@LB=m8>aSzhn_Om5l2 zA1E?*hHe2l-iI7fG{E&x;y}87VH8Rt%L8~+-KVFiS0Fv0B7{hODr)}n7a(hgj(FWo zoVd?rpv@McCv%rJFE(;TKltd>aGSaJXa~KGMtIKC-p#A8=7x;_1DxGE&vy{GtEnIC z{Yw+745v2=gXLl0KDefTKc!%9-eVP%+uuCFA=x{<3yx}EN%nA5RSJa?CKu}OpOg`P z(x<}fgA&_tBGpu?Emb|`+#Oc)%QBL zLpwP9PN=tnth|IY)ic43L{a8)F-wnGkNOV747_a-uS+w%t!akSs+72L!=y96rIS7i zb1ZOgpRAkpUvLlFdL*8xK`g^I&(-Kqpwu)=0%!0;%Hrz29LVIhCTrm|t1t}C?pO<4 zLt16HlstB%%>BXEA8L&w?q1YYTG4Z0eeSfF6G{TBP*{^SyYvXi56a%As-kc!lzWni zFfO!+Um4GSl%1ls^&fMg3>#ts?1O=LS=*02(&yvkanVWsiDJNAS5NLQdG-{OiI_6P`sQ;8c!A9zVB*gJ8chG6XcX=m<9aB*825Ih$VNX}w&5+X`GQ+vxb 
zcPq5j`Fso-7$35mnD$*LWTovJ_wo}F<_B~(Vm{hEco3^?<5d16Hky{H4rS*fwR`)` z=X>(|qrYsmvgD6SVB&u>oM@~V zkrlJD$=o4KX{nii8l+0u_v&Pk{Xu_Mrl~f(AE#h#@!?EgtdF&6X=Ti;u}fnF17iz| zK;UG?UF9t%*ZvzC4k6yQ@$=AZbFS9SixpRuu7|I9^&`*5%Q$HIr23__Y@RT=`j^?t`#8^a9>+7L z)E*A;S!pet_~vZ_U}m+){N1hOclkIiaq37Yiwf^meOKHpN@}=n+j{>bG6CGxiQcE$R1BbjXGcb2LB>Kg=JnodPFehnF;R-Zf z_-5Eqz|&BsVwCmO>o{JJo6j`(txH}pnTCS2TD7Na(f7&;>!+o$Vy(jT(9T2X5%4Iw z=fu0?F!&nQrM=#Y?majiO7aU(jShMaDhM=OECfHM(4u6e;ao|M5F?9v@BNBP>x?kp zjd=Y+n!8{-Vs1H%mr(DATCTCvY=GOh@vK5vnGRN~E_PMuLb{6>)x);DOmkuXWF+=# zomOKKlg&MI{FB6#^Y6Z2%o_FGo(lb1X_X1V`-*1x?rOlCe+N^vP}EWnU2aOC*x+Ji z=u!m2w{Q*t74=(s6;2l~^}+ZDy1srA+ESSc!ZcTJi(mG*J=O7HYswm^YW-;_XdP>~ zcB@U(G;n+2(6NhcvM+=70nhhn`+km^&}Rb{K`m~kLF{>}P{J;!@~ubo2W+ikPFM<+ zror~Qlg&!!WmU0-$a_sL-3^)z%w_dp1lFLkS}Y=Pq%|Jea)DQ6vo}A=O6@z|Q~oge z^L#{p|FgZ>LJt=B&Hij-cVkaS*0xqQdbPNoVzG8E*ZHvV+A{-+w9U2|2E*Y{PX3&z zRFkm*(UjCAqw91L%weu6U zr}$Bi1*f~y>v%|~GPY0H2^9n$LIckeQ3>&S-)#!4oa1<1@0M7%Pn%-JHw`-i=la$; zYrTJJ&)ZWmd!5;y$J0c`jkB)kGzsUbMNQHU(OPM+d1EConTkW+lwQF0sM?1uVRSfK z0J??U%UDWb*M}9+aU;{q=!oD6x&68J3bhI7_v*A&QzqHke!C88br5ld~5J7s*Y{S2u~4!Ua7 zI0=4qD4oT3`w@@5vYKShE}$c&U=?BRIudFR+}}}ZLecMQgD0IT8e2cuM2t(^gwA5z z3cmE*8uGweEQddhT3j=$FMnV>$w5UE66R%*Yrywl=K-5pq$K%((hub@me7(w#)2oM z|7=C{|EBS;gXjMkB;`LdaBRAH{s>m^L3IkXhx1N7pgLo1{p#*yGqR!V zLAew(N2r_bd(rhMLCcUnZU*a3*mHrvQxg3gjgPpPQVi)H$4l-t0#_h1Jvn`q1ru~P z2Pe=C$tB(TyQRfa!+8ZYNo-sn-iD~AZ0{9sHQctLm}{IEelfSDVt7l4{2e6SmE8xa z1opLIWBc-o)K2<6(b?3~TR?Ul&Q`~YZf40&Pm>2QxOOU-b5?pqqf^90|MhwRL-% zyoxv&Mv>Kq^X%J_z1g`n4Pc><2*2+@*CxRi^UC)(dOTQ(w1V=J^n)3viD>GlHXs9m`VV@Kks;nle>@|I;xLmExc{D5av~DBB_o5#rHM}0 z{4Vx&<$myeh-JI&=w<(<1+2{-hZU(ny*f9e-CPOTVaI^7_t%j&#hyD=EYwKRg`#sS z|#Jl#Xv6u?yomE*_BP(Pc|B6js}?@2gdT<9o=ZdcH4xZ`~{&ZhRqu zG@LR!J}!VQ5Vt@LEEFmp6E!uS7ec!0(01~cB%Hl(s*tmEf^I~N<+HtMEjM}OZP3z= z38?>dwxexI(WKJ1OOC6iFM6U^NKL3d$8w#SdCE$nU!X=R#*Cg^P3e>6o1v%k6D@{R z>1mmyaP6c@pwdsn0i6yIQF#eut8nVrI;g6c$oHL!2*T3s^zMlWW7YsBW=!}MX#I8N z+lCJR9054A5thmq<^$Vuv^h1r0{zGx%k4rDR0oBq3bnYt?4<7OfrhxXSpiA>wK6Mb 
zWGfB3LuV)hdEH5IsG(3=3#Tl3h<`Cz_Id&Lq_d=@RPK`|AKh3uSQXW~AJWYHTC3!~T=K=-S4g5Ql=#TUG;hlCc1Z5qr+n0jZs1lZQx81s zBRyRCoHf6pw;5yl>v7yZ_T^3Yq`ub-uPR&2TK zyrzd@Z$j@Ob0ZLO4yVFART2zuu*0*C$OaA{)#XZz7!Gq=$_fiwd2^(Nu2mslS$23m zHXqh>DVm8ER++Zwo}JeZ8}xFFxm%JO95tsCVjHEyLZn~K1Kk=u!6#^EQ9`oR{PqquEMG_nZGtmQFL`Oc!zaZ2)RQpsHM8;_RPFZf22XIM;Nff zJ69Wy_ZW@}MG$~$O3O2M=>X`hV#I56xPXmruXlbLd9L;qDGO97qu4`LO9HiP0=L?r z4Pq;22`+M{I7q-C|AY|B6n~oS38nROcNwE&K**Rf9DwuPE?sS!2C8yBU;oOFqq&is zR*Mgz?48>^it1)l1BZauurlrsatS!M91Wj?qHmt=D`%e=SJ@`|aHV17z(7@0w>H z#fhL!K~TYo>WOzQbgFU$F7>>1y$Xrrha{+5rx4Mb?gu5+SQ5ALxUl&!luZJ4$9T&^ zxMIPthiA`|?BQw-i(fbougun0bS6fIpgouN;7`WA$Q$7kz4!$==nf5FPq zRCxpxcKee_P-rvE`nq^%dq-csxC1u}eM9%bpZ{vQznX8AZ>Cz|)Q`>S>xdPl&xaXo z0aZ;q*E$kye0FJ)vjrWCLfTtVPk0}Thl~8c3HOY1C0>JIuR6e4N1r20G$SO3{2+-= zu)AfI+;g{NO;_0`3ngO$lAP97A-~unCOG$fUOCk@jK;~ugnv{%kFbM2hwwA%K6_mC z=7icQJ7R0X9^H9UnoCNU~73p`Ad%cvOAEiGres4i@Up7p2fx zF~DtO5j201J$Q`Wg1@+YS71J?KGj0q2>Qzk8O3c-F=QiPEhOL6fIQ zahEV3>xcUjVZ+1AQN1DNY@YpS$i_lvfD^LQ%KRE^avO2;J1IX>)9V5<5W*B(Z}Cdm zxHQ}~m`d5Q!=<5dyZ+(k+>iRZF+^`=@^6bMA5lk6hdX|91-dtJ&X3ppfuaA3)sxOS*;ubYw zRTd_dCSN8lUAQdFi)Fc59?eypsnu>yCYFa~$ay*}&;>{>UoL$owAZDapVDw_pw{eE zdzLWW?FkZSNWxl@&~tp1mNZUdY_6P6TnB9OZ7+@){z>zgcb`J{=-fYnz`YX!3vwUt zAoN~7l}ZhmN#hc1F#A!Nlwc)uPGir{^K8OtSyVu`n5h5$qbW(M7Rub?Sqz#Zus+&F;eYDf0&OsfEg zdMj>mBS$gw6-h&4f4Wc3W_!AHtYvpw^{=jl;opJ||3mUmuJb(puYcS!x{h;uSAsQ$ zNv+8hXhF5kQb5YU*2eDZ3T{;haDp`=w?wUvregaqN!-`D*c&f~LhB zsWKqvR&IHz`_s^9M`2QgcD3WO^JFW0T8e!7(+iyE?sw(%G60uoAhSiBuX#tbh8x*P zkC~`;|IV4ef{(7&BGlJT&h_D{FN<`^-9D?wQ>8%FY)_%EHQjV^`@vS_R5r6GDfKjo zE(d)yoLGLsbQ0b(40V2Y05vk;BQDnbS)1c5_P*??f8p9_8RcmFs#!PDFBOUeGBlZC z(z|B)IZHXpK_L+hMuswIuM(mPrgVPvA*NbRz32yH+WTJ`M*UJ^V)EcE+7Z@xt_xxF zX+7!4?tFdBtVF#0roPGjTZRemb*zX33wdOII6QT6;jf>{4qt>2{6NIYSTfgUV>#Fr z&W6-UBGjUL_Sf2~g?>azM@hS}luyto8P3rg9`fuss77%*G8U+B);!@yFsaOOmwFB@ z<<3K?ET4<#%ym(blaunAh#cdl1*23ph%wGnQ?y-Ng5ermV#nFF5!O0k zmA4_kr77N@$j20!G1HQh^c96!IHW#W$RDZXo@$})qU4iNCReqHzrWXZt^)Vuwqb6b 
z*3$mi54nq7D7&v4m$TtUDVf)8TpOVp9+%Gb6dPn*=TZl2ReoI_0|@d3Xf~2*F09rT>tk||^ze%?n^jC# z+)qZ(E{=B{r<~t<{O~`OGe%-<#UCrhP+~8!QdQH4$+sC2rRIcg zZ!$Bq3e%LpAH|P52TBmEFYDD&x&@A>)K+%29ZK3yL>WG-?9zGCmI>sa=F@EVq`i8y zT{0!NTrYGW0l8srh;Uupu~?u~tK!Uouj*)@^V;PSb3ZcGj8t%hIN{;$_Glq)FJ4N7qT5QNK<0oDu` ztA!YoSj*dQHLB)m-agSy7I*C7%ztRbK9}8gtNeq`afRi&Ylk-yEeB!WiU)PIHG*Tu zapD!fNpQ<5$ZV%z9r!57C!A;oryu6xFqgt>LH`b3(Hd2K(mI+buCg2Y(ZIYpGOgT{ zd>)KFhtRO`JsKbO{F*WM;PVoBzg_Bh4L@Vv=2)~wRgaUobdr~r`qx5AF1b%DN$ z*~D4(pP880DhOe&FA8o2=@uEzz zmI(bhwkBA!%!kviB6 zPv1N25o}hYo5EOw)7IHY46|~xvQgr3(=v7_jfL5H_UWah)U%~Knp^##FxYpNt;9ULtZ#13iTFiDT3J$FhpSs>jqyG!&3s)zEjJW6X2{-ofNXb_d*Qd_kCoB zRn^MxVwy6EL|22bwC(5WEPc~e^NkfJbx+rggYu=Lv7}F(%BnvhS%&K87A5M#?TI6- z!Fx$q#}Hq4D|=nc{X-Y$mhcv?_l}&0rY8(3C?Lf_Mq|m+U9&Gg#_uyOO?#l6bFN^; z5_o#ptK#BEjE$8u>TjcP+T$taYK`wQmcbkE!86pQ2gcRA_ru}vs# z=t|~zu#D%i^`5hA*;14nUarM5p%~ENSo?)WDJgzT>4ZM7;bhV(I|J35E|V@wiw_Sj zlQs`FpQ#(vUW(a;?-B<NndD+M0DH2dq23#&dl@IqKF& z)RvTwvg%ImV#Je@8QF&GGUp+LSI4hF>RRAAk<5~BY+kmMA&nPHby6})vA>pLcvx3^ zYM)0k*u~v6 zK{mfQb??VO&Uffk1W}V^zS(^<8xxbRBJQTPZ}bE_*5h~$3%K}l#xfnVHfL$8E4)`- ztygDCh@bUqQ4tPT810*UF%@*u3=7D&D#JwQz{bnDH%Quv4sT=TsD5_Ntb5qv7M)aE z{p1DBM(3Sv2(!=7`y-$vgqG}l9~WiY$gZG*B~6+O2R7MPo88!ctfhf6Jo&m%{zNJp zmLMv#0lH5)Ib2dgAmThh&TQD7E0Dq2bDiCG`-7c!XVVd?3CRo>IXRzJlEFzt;N&W5 z(!rIrHs#%pzB===w*(=$J3(aD3$972t70WU1WLVH- z5%s*ya&&*D_nQK?IXVBhEX0s0}8tUH5P(tZDDhX=TBPjQmW<&o!_P#nU zs1;9{=Dsvp4sS+55g%T#6si-k=;Ed}VCpOr{+2%1d&60wS=TRVBto+UW3&TMt9CL1#Mbd|EA0g;@rm+sG*(|{@ zf{3A5jD1JO9{uG4m3a;IP7UeZ^e1X*M8oj4@4C9Kbd)Y0ilXj)a^V*6p^%(j(ttOVG=}#Xs%eqU}Rb9 zNpATHC8*iwh?Dt-$?3$D`Je-Mzj)u%{j*QBS9+bcy%D=O6x~?QbVNapVTcMI!I+!p zgj+$5a!y!c*BLkYkjQyw`eYCn@q;tILwr z9BgHD9nuQ;{bwwe>KYzdTp$nhtrhg49&Np^8V`f?_8BPY9iOmCQSj5>kAg(Vw154gDBh6eFgTq2YJq^NGC^-;*^DoTa^El|WV6lP@kHN~(It3Hp@{@N*dW4+Hkvn%7sg zv*_E;C+aiZO(us%@oezh^TyJB@58ul7)^bL>W^4jP{Uj-*?077Jl`GC&9Zvr>E&u? 
z7YJwhg<(p(w?!Ji|GW(uziD#e<%e$G%&AAe6gy_~YZIo{v(HN5K|k)TVR)t4&PSoo zlzo9|F{{qIIr>tIjeAbC)baF89t#f9*`9PFiHe5zf{ zQ@P^?3W8u7Izoe~7oC zhraPbG?(<_riY4k6)?u0U93?)mqS8o8QSVt?+*{vI|=MbdoCT#?f@vjiWk3!#`yoy z*v-YFPPr~lMuRrT9zE07iVic~7DkS`3DxjSQP^iiU!I7FSjvA>c}VA;e9%hdYG?0= zgJip*exk>PwZ~%B7p?of&3k2!VOu{OiQ;4D_c{eLeTR=saoWe&*}W`KPBE@IY&Xk&|u*n1}OUO9k>A&T4uUP`U^P)d%E80 z>m{s(zvo(E!$B#>Fd&n%y>jsio)#1qlzCM$d%51m*j#sC=pSkidH2@P%EIR=;l3R> zf&@~`iKo(JZJIOQ!{OJ2EX$ z;TZ!fJcg;pjt5`7*SS1Aal-9&%+nhCQIova;tFkINoM9F;}Xo1FJ4&^`rplLLgFjx z`s?RnSRGPdkWZ-Jtb^~dMYC96&GSzXYfEl(w4Uo84<0XIlF^IzzG}ls6le#^VZGNN zb|vaSAF+3o@IOXAeu{XAXkK@YFO{)_hh@<b+Y+TESN&Rpqj&BAELauy!E?)Awwya`m?Yy4 zQ@HJ(cQ>(O51$WW&h%B;@DGX_EoIzduDg&AiO>+e`ykL(z9P4HL;QMvOEa?NT5o0< zqe+teC+Qt2HBR-m@dIX$EQ) z>f=gzrL`);dO;xuu5YB~gy7T^Rra^E4o0ZuUoR*MtLME#>nS9*OOU#$5ZRu@lUJp( zROIBNv0H{G@c;ut<9_U$;4!p!cqS@Z>vsCtfpuuxf%;U@$+M{^+%f~zdAoNl)dKSF zj5Ozju*aOpiAT*Kkp`wTJg#gxl3YX=Hh4E-*2ZnhlJcdU@wD2dfaM@{!F6cJeU>L< zkWD`e%dm2FVrTs`WCp%M(Cs4u@H1@nJWeJG9me6KiiKRS(sI-dvhX#oh|o?fMgX)U zV8EZsMaFhzqIl(JkQxt03UzX;Iy5EdNAa|45%WqDJZLi|`1w zY=0}ymMPt$i}&z6cGgc-KoLk(%a3TTjvQqTCoZzGDfy zxP!>VYKL}<5!thc-AAHzTXpr`o@;+lIpk|3I=P5Hm(5nxu_(~P+I9YEQJ?P3(;jek zvF>g72d^&*$!LYVz%oS0o#aTbyUg;)FXL&za!&DLCYyYkLXWym+4&c@?YS3hdf<^5 z9xkhU!9yCYjCb5=B2MrJb(1rw97_WPm~jNj2GHb*##-DnY+0_I$}W{ip2vIYdXo@r>^H?S~G~zNJ}*_-&Tj8kP^E16JCt z$cm~}wDs>d;GZs@OAuc(7+ni;7Dwx0I}5SH+JN&x{U$!PeG+lJEXfnAtOx3~1IOE( zElVkta^3=6&xf>uWW|#?&i% z&*#Nh2Y`vrB=XB8{5dXR{HKY|(68;k^_)S1I7``8du*tk*Cgt~J>vMN5vJ^M@w+kQ zNf5uc$;Kz$#=oJ7KZeQUrY5g~;R+i=MSdN!l_!?5xyy@f-^O!wfIYJ&0u0_ABHi%=)V0C#5NG`_g$!6Z$F8ui#5l>rkFy z^<<|o4&oKi(Z+piDy|9d#x^1FzZwMos@eKCi_HDaB6avxAo_FZu7W7*+%t#9Z3^lX zDRNJW^x#jHr*KE+?^=;*P4xC;9xg6W@uOPezG>n9LI15P*I0)Aea^0leWLrOL|^Q% zc&n6KZ=1ywbVKv`RUMk3mx%)7=3%54*1AO1Umh$tPerff8@L@= z>sqwFw?%~CrcNI$2yU6Oejd1^6Uq6e@4-TRMWL8?sg((-b@(wKk1$D#kcPt#4LU95 zotIjjrz9>>FiZ)F3gDkOp_k!b z-A0VatOt$`=TVTaLq(=tYUGXpCQ?k~{%XYQZl(V=py@S4}AJG9@h z#R#zp4=!}?jF$+flt-tg2%XS)jhM@OBi9tRBG{JhZ&)VuKb}c!H 
zoff?DEHJq6#XziDu441h`N9}BEy(2=Qf=Ao@dFEi6$ zP1`e0O_UCW#IVk_y_xAX+YKnbcDOLtX++GQ9hPgb%uyaG+x4Jd;k>a~EcFz`hiduD- zpf2c?-}|7_lz+E%cpo2K>lhEqgIj0nNt=khSdF*&+>TY$SL~dT3!h*X^g4X-I6mt; z?1dRvwtscdwj4rq{hM* z??Ub^#U(=cH*LJF+0SP_hDV?QE80V8Mw|;-N2vhn@x?DsJtX_ps`Y=>SStl zT~AP3lZFZU=nU}c8`NXh<^UFXJ9?mebrKMHoMpt@Sr8}q5syLl%L#3>cM5+o6m~HQ z{Sy9bCJmjSR(75`A#2Zv>)!joGVo6ly%06;Wfci{* z_0!^EiLXhaTt$V~5IgDpCZJo6Pyv1OzZ-DC9VB3FBh#^AZBj{%J9Z&HQ$a){LsTu^ zrBKB?PZY;Ilgz+-+j}+EHqz~Nxfjxz3B&X~a&mW3O`A7DoJwh3W z%WWYo3LRnpGQhM=;WaxcbT27BWMf(=Hg3&m`0}|vKNg<_Nxd{!|9p8zPaStw4j0RF z0(-1&&5)K^f5Q$AZ^emACe*#WRH_6~KKt)4579VVjluSp56gsLi)+x690M@4TOJ2$ zx*SDh*LFGmYUqu;fQQdI(mZN)ReW*_3)bARWVD``?Z*%^olWZyfqx~|EdstgW!RLO z6_BogRjS4Iyn>HTu0^cU0d?{F|2Iq*aRA($2GV~iZw!P_rQmh2u0l^V24N186yG2Q z6y72l6-Tj{OxSGd2lyDvhPG)hrS=w16JI^MW^){5>t;`D58E}MSp`T?1RyBemzVIz z-E<^?KHALd5HrGf1UA2jsYeF@9`YX-%|%osfM{!6QUhj9ZNY#Ul>~cFGSyv)buKSN zfRyS@7Y$fZwuUW`7Oq3(hFX52Fy#dF15EK79^`E z?0c;EeNa91k4wV;_<{C3q)v(fbDm4n;EBMsyUSnB$8Nf$_@*D2Ef&A=Sml58Qw`z% z{0_3eb_b$K)!%pmHNVY)ieGQw+w&UnH|DkC)z78?;HiCWVddt2{$BsnGwauS{fqSq z-}%tgEWI1Ii!<}jLSS_&b6QYfXs?M8pZ=7w8-vR=8=;z~f&uoZeI@i&+H(rih{;97 z-E5nO$64R#+Xx+_54nGQ@kp_MxeI71GzL5N;{Yzn4I_meIzpN&kK_F%)QA%_;SE)6 z0tLDh<+n9SO8>_zb!~2=^q*-6{C_GNBY`_K7Yfb&zFxGuq$8O@{~jqd=to$>L(WWy zmr+a@n}F4J?F#tnb_4Cj9>fAj%!~%*es0Dt_QFRmA$EdVKV)f$4j|JS4c)Y7d|J>A z$w^=U)jH&}D4qajcCX{hZ>0(->UgU`QfeD_zr{aa5Pkyo+1(A`C;Xet0KrezUpA<` zg${LyMPal@*m@@2w<9--C2@^Ebzk8%nZ@1#Pj5!}`oI4SL%5;b(5vAt`$@m&MKZGJ z#o_(Sw$%CdPumO}{yX0{6$;539mw)Yp6DpZMi9t0vfdqZR=qzQ)E9@IX0{4!r1H?l#WuHSC4R7W3t|BV+O2aRhAFDp2Ru`tD>NWi$_xkaVQk2QlGJ zdXLSN$i?g1-)bkv?t5fwG|WjJecJl@3v=b5l@T9;(OsgJSk`>d_M%?7>ALqJbCPJC zBfFbUwWQKI6FLXaMz-&6 z`>^!lQ1#RJmrg!Hnz&mL&g_-McqPVnCQ8OKGe{Pfoe*Ifml?mpXtzja?!=1fP}CG2 zd^cFEvooihilN`4g$~1}!hvfWtQ)pp+OewZ;=Mh)*Y=qr2%yo_%z8Qe6*rV*g8TW< zT=vH@@8xmF!|CC-#c!na>SLMmo)on>PKMq)og3=HuvGi<`QrE)RefB0jhgsUT#;F1 zTZx4o`-mSYx%IST*OkMeUUyc->-{RnI5qr>lQi|IALe(M?!4q>DN-laEXz-=+VzUO 
zV2(>KzL;d!rOxgZ!>Z;6qur0s-lS%-tt1uD4r59=-5i~3@`%QX3?bdb!)~DCBbys+ ztq@L+qdFnC4Wv-;Jk$$%7L@&FC~ZYqoMFa}()XEziyU(T*g#FFH?Cy&>%qa!%V z&WRE$o24KkC><+J?%xum6Dl)l zE|faEQBv&gpQ!&|t33ICqPKVAq{w#4r2pym{I>+dd0-#xk``r>fk5eEQ@v3kD7p?Ep&P}PtFC%4+xI_rydZ>lg)%;f zB|d&Z z1}z^uC;gPJdo#xlyB#zu6C>anU(DuT0YwNoP1bIJz;=5?H9Nn0CIR8wuHoE}6UXd5 zo;CEFt+pv!7reZik#DV{x)kY4=M?rzb+7J$gv%nrv1&slR{a?9Dz&ia>X7O@rMf_g zx9@5NN>MnuB`Bmq#UP~I2Py94IT5VMQ9C{JVA*@m<5|~B zdiS6f{KX`O5Y?UeEjbOKCJE7SQ|=|6oq0H>jV{e%SG#J=iRGjQ4D;Ht^qQ0W^y3wE zywvjKmsHN!5clski%rX7O;HbPY-zl?s9vf-?EBE<5S>Aol_E4VPh&H#NrGk~FIb0; zl@3e_)RNS-hzS;QE)I{5iCWkn%eHth5!lz+Uklr=o`Bm*JwDow-kWybc*atCEbF>J zlDf?S%4O^!6wZalm?XJSAZ?*;eWzuOh#IV<$~;tHrDcc>nQLduP_ zMGtc97fn}FACEM;`?z_^e{As0rKnb{Zb#a1yI;p0v_MwZrLw{s&K#Kudn0mhbTQKv z3Odu3>Lczht$9wy1rV&j=>hc=wXYd1-D=!bm0>u_Ux_Et6rTfej@oqerrG*blk-<6 z2asjn53i_>vUDUP4C^EvWx74RU2jtB$ZYH#lkh=orQ+E=1i>QmT-nPP_xl$P)3k(_ zJCI%o*3wQhRI)B_c07>Q9o^w)n!^>7^}tiRe(_-g%)rL|#dL;1dM711kkZ9dkB@GZ zNBRQtyuE);vY<+GpG-@%;9b)9Zf?8aicq>1Q(IhniMT~-7lYUCBM1Z?n6GFUGqvh^jo>ExB4<o6lB~W(AR!8i8FQqiy@U+2j z$(1a1TX!|}E{PF|DmJ6SRn{m$n{a-=42|)QjC|#@9+VCADcPx0svEXX zryd2rs;A}Jo$c>1=z^oWYT|}5tX+?-CvN54@EK2R#!`&yU!S7r(z#|jW~x48D%THZ z6(r;?Kxq{ub{9+DSP~1m;D63&LP!Wf@Tv(dc{P$=&;I@Wf|Cbc1qm|jI&5+i*^SlB zlsn~R8O?RBu*Fe8*V50rShuM7Mz3n)_Mr#Dp-D?6;BKEu-Yljto|(Rh-9H}GRoCR> z)L0 z!e}GKee{?m8BXl86CA=L&Qq61(%mzPlT5I6ZXaaUz1*?CJ7`4GzmJ1*9pX)4dRX$@ zl22_a>ogL-GDewBLAFlpAr>oREW_`K9xGLy*{U*Z{)DW8&? z$l`-b8OLqZ;%EGNEGOBSDued=@ZyCGmH1(SoN~Q@AhFeAF+$|$)jHQFiofisl>3m7 z1&=jRh%ifgo@V=CkzJI*P~TXvb}!WPM38y31vk^Gv)sGnP!_k?QiCcB!IVGu+th@& z_=!;kr&NfXVmvtp5qrw-@Q?)^o-;bAeXJKYom~zPOM=SGXloM^se{;x{L)))y(XO! 
zeB-{CW9?rg$^$Rz*;c)he|kqACN+q^xe4f|zd>Bx5W&9*0gT`vuA4#`qv!z-yC;fDE@#Kc^BBK z`?+&B@TkB4A~(7Xe(P^jvOh&`&ozZ+2HlpKBLy6OT_i;Q_h?Dfo}}3(sI^Ur<}IRw zD~PR)F5pkPfUHtcQe%3p%4+J~d-_Z2LSh%=X!pC}x6lVvq-#$Upb|2)wd71y1srP_ zj@kEHqgkU#jW>cj9~rGbB_k>O>I#^Woa@k<4dBq`4ds2%-Uc55wbemw*fo{F_R;^{ z09&*SM63UUCjH0BzdwncG@1xRV!sa?`{>)zm{iTI?!Ug#X%=J^)~|rF)CV)AvBlX~ zceU6r2hvQXF-UvFy@l5D;j?h0b1Dcvec7@H9v8Hj1e!x{RY-4T9FT3>zxZqEK3&(h zY<;j&LJb(_?0K^3#piA->Gtc;y@l@6XqrgCFf9XH{#R9(7s7DI@kR|?6Vuj|) zZ^MQDJE8%5EIn_A)7PkYYWHeTW!c$tCUPf@`?@Kv+7J>{z!RAb*d5$Tx~BoHoi0QT%AL-SoY_1>0_~ex2Y+H*<8%jUu4g*{@=kvm)%;KB%R_NyBEvsN z%5F&8ZFG9zx4Hi`b~AppvVy4I_5V?}nhlkj6f$kuCR$k1=(r1S})4@G`cxgYFm0yqTvp74{9dD8#tvxCkwIuV!QSHS9$+S}3eT z&j7qf4Iem)q`=_wz|gx8$T*TD=IlrSkZjtQ@Yw+nO7Wk4mcCiJJ9)(ol;io!{COW5 zOKx$S%f8B-i;0VbmcJEj*t7VW9Y5*dciS63$<~n2*DsZ2#u5K!^g2gSbTArTfmE+e zJ^Ehvh;o%{A?4)D#mu64m~Z(UNptz+!`PSdJ7Ur7*??wky@o{xU|9oJ)U^&^r(O3EfZ>c3&To^ zJvxt+&NBvG)2M=Kd1)u}vhQ)jRw;8EP-{LL-HaA`z6nt=o%L2b+rmxoNto3TBF z*<4&su(HF1#1L*0p|J0^*l8L)Rcj|Z3>({DhL$p{eH#51c>DG_3OYHcN62d`)$Q9( zr(Rubyn~mIMs?RP zAxUX}Izwv4Jj%M#^)E`%a!Tmz{AguX|$9E|EBsA?W5kowp8o%;|L=jl8Ai z@>EYmpPtibs1Tni$u!sM(VM7eC-awM*b6)+cBb4(2o8?B0cZJOFjK|O*xn$YSN6z_ z*XURR+hYcT^a)d28sfg7RACy#%Tc3sNCULrBd8(TgY--v5)}v$SJM;jWzp6nEO2vSaSg+4!ZmBU*!|H?mKTj)H87+x z%n$EW&Kcfg3y+lz?d+IAv!mRcCs&=5kM?dyvX>BXN2-5N2zh+&T0k5FHY_4VZyA+y z|Drng2;MvcfyhCJZFVSP!?Ia_-y&_VL-)`!H_XgfE-$q=Q=KoNnHKJ`qfLkJ;>$yF zc=sM<@nU3|^5jiiSc~naKzO62gehjj7CpU>MyR)3TN9jQY`vT4dPifS7Gro{hI~z> zgTu(O{w$j~TXg!F^wewJ{RD4U9Jrs@ws+Ul7qYSKdnGHWCD&d=vgbv3@4iwx>S(Pa zMK@ZbapoywKy1S1zFAin-;t(O=LfW%f^%KcrKPPzeh+szxJa<=-OW2;&RbJvrn(5* zIoT&eO@qV6|XHp625reH(soKUvW!pVjKnG8LF#e=g2w}x8MxcZ$b8AN1rsMx{ZL)oixX zUZP^1GGN)^(YfD})y61z1lZL&hJo9Y)bPu?zbzYt9jgWNYac_pVW1`%$4V< zFhO!gIIJBl{xrNx@p1D5YyZ05w%ukd6{8Hg3uJJQ86J{7hSYfWd2xp7SFxh?RqsM? 
zGi~>(yqrCLMzVCf1KX2rFx-`3hZ|1=q(bVi1Qj-^!=^)Gn;#{4OEB8%_$34pw~JG< zI_=Dwxv0rHRAtfodg#)XB7Yr%_zI=reoW)(sTTjdu%S$G={769V+&ob0^B&dX8RdO zX=g>Il$?|7X!1xE&VR`V{>y!(KKQ5Wc}dQ1)Q z1va8Vze#q_BpKc1L^M3}jJBcBcRO0*$JJC}dZTW_itqb+d^L2HbwwYzlWaYFDqbG?r|f zR31MH5bp0cmya>ot#YgY7L&mWIl?`bT{{Q`jXIVR2a%J~F{dcnb~$a;)8 zq!q1o4HsFJ^}%rEz;JLmN*?fK@fe+=b|vLyVDV zVoYHafIzB{5Ad=Lbw}qS7yyFYK|bffj;G}doZ2!SuOdm%zG}J{r|OLkk|^VEGm-TA z2csqIM?T^5{A}*do=E&{AsI@kKf_Skd`=va!!E$%pmw4C6zu%ojOj0q-{k+>>sH9E z`XIVEFfqAhpi63xp1jtJz9WhEf-G83?m*xMyeTP{Y3HFmEaoIUr*6K|^Jw<%cu?bF z27iP%Po`F@y0&)h(SxHpZSu}fap8V^WhT!Qnz|RVJ^0V(^OHtAE=M~*x_*9^%$#dc z5F3VH#j)1_MYa^fz2>@Gk)DglwIEO&pFWvnQp@hjpr6V4+%@)?groArUe9sIgK7pY za4f206xy=0nVzymmt6~{5Hb~TCY9{zAz-anO8Yul$D7r@JDc95!oIrf_TWyF>0nha zGS|J3dXWF{g;J8Le3Z7g3CD={rK85LGLDLDtG+mFc3(FIp*EtI;$+N5dm4G}#0rvU zJR#~s?J*S|=#j|r9bBLr@-hGV=#&XAJ_f;p^HB*(9aqzy!!<4#u~jWmJ7;EVa@(+z#|0lfcz9Z| zDmH3lc`9-^^}z(&0q6bg>OC%odytA#_*XD5M7#*_=y;e!g#TZR~L+J+5ot z!nh`}yG62^E&iooX?ym1rVP`4$H`k#ej<>d17-Y_)9H>y580 zoYJx!``5%?nf38C(1pEeQd}gY&0O{p9OS-2{SvUOVC=zFA&>gn(K z{0CME#Yu@_X>D3A3!V7R@Md8)rR|HBihPZ!NR+}_lp`o{{d%mtVHL8>RdpNF-fQJ_ zN+Wz}Nw>J5(6M2ckWkCxKmkj5>e3XK@{{(X^rboXZn&L(PJ4QRK4&S8F!W;#af_ea zX4U+W&dcr5-<9F=qqS`!Hx#WtE}N)zZwq0SlA0k7wEIFfYLaP=Y`kp*PK6kTB_TR4 z!2{!nai7EH_{ZUkvutCXtM!{Upt)>H$FuDV9AsMK`921hJ$3p{oRDLFXxEU&a~nWI zx>d0~IRF}B0?<&FPjxxbq7$NOIWju4&HJqI>kPK1S8iS4s|aefZ|6X>U>vXr%f7W$ zFnF&B82}q^u7s@R!17bsC#Y44s&~g_gEliUM2wlS{vMcoRF0kbqixt3Ezzr4M7v`U3rlwq+%q84yv*t z5m?a!CMc{N24@WS<-(_!kYjBeI7@XhqN7_IMVXyEwpm4g@q41sMp_4HZqxH>{@*Q_ zp9&IjE1r^qDB8mVO_K#6Z3U9jFuhwNa`!`{~VntLvP)S8c> zLn(2EZV*A(H@Q6UpuqdxaDG67lvyQA6pN6Jqq%Kjt6pu>OWrCtX)&T+xXsWS&K5A{ zM0C0R&69y*40&hW^z}Y{I|6CaI^NY#+LT6LEH};qxliWXNtA*1Md`AW1EH8<Ag9=RP*e;5Trob4=Kt}LT}ggh2mU@iA@@Zw!`Q^5nQXDMXZ z^e+c|D~jC@;KL+m&n$db)Z!;ZEF^kZ^iwh+=KihldB`@ao^07{&Xsz=mcp=Mi)zug zAyYblN(6NPdTh3_7-Bs9T%d&il`wl z4j{x6=Uu97slAG$zYZ-&laQP@p}1@AGGMa9v-E-(MD@G{Rp>v4MkG5w&FPd+RBV!{$=xPdU6|!;HqWr 
z;ChDUlE=k@{?c4IwO8$f4O>faXJ>4}GA0AjCEVm${3xz~VK`h0%I=4Gdd%0gEvDue zRi3^%gwahnj-82{S(qGi=7emyRH4g+;?K}*=8w?VK-#Y{n8;e$=v3Sh6zZ&W%aoNI z-gv&vqlfKtZ!4${V4knS>{rXm?#+&(y-{))yR%@{5r4A0 zb0_rHUVIlk^ApH6QRp+#GBDdl+|(`5QU4D@d>|olZ`pX9W!wU{PxJ2&`=<(foaAgJ zDSe8RA<*LByEF2tkQCmQ4ZAz-3LcNBz=~Y??1bGPEpy{!nwb$L6*ri%itfHThsVcfu%pZ3<=RQ+y@!!H*@J6sUX@3;BZ=KnD_w_< zsu~WE+ zYFjCt-{h!dmvYHt&*}HKyUk@4BpKF%Wt?d41c~LmNWYUnkbWFmAREN;&Bm0b+1AS! zKWSl#>^&33Ab;G{n`D9rvd<5&yIFL^E$`K#+qswBh5|w_^yPOV8d~AYv#Jeusx0LcD*g%AJXn3H1ef&_u0N31q!mH?c*!Wpep3QnZ5cseQ8ss{P#}; z<}Z4O-#X7fj^{6f<&;fchiE}lkgdo?Hn){~gHFh>8N=hX*GQI1H2Qftrc_fRWqN?k!XKOCWc56s&0soiw+tN(W(I& z$tPuJo{`SMj#&Ex)--3w)G^ub7sqh$U7Trs5NsFQ2VQjp9VR@QaC~urwyac07%bR_ zaF8MPGU_3s19+#b-Jfnke^N@Tk)La+mSkyCi;CIT)U^lFRK%!ulgNt-dqYSLT1VlK zbNW2&?9?*c_e-4ANRZmjJs86~Ic0?{CRazA9?t%-e*iL$P zEuo=4Po2uwZMheH)XZDdnMXKOw3nQUo%kW^7(!%qw%63H{YLIeeh52qTQki2rKKrU zho<&X^==t`GurBsY|^IuVM|_Cr;IU@Z7UwjRrcF@v_b9X1UqL_%1b$D5rpu&@iBN`xhC5=?UlxTL`eR%)Nwz{9{-Ft{`9yunVj~#aMT^RJ0Nk9jJf@p zNc$Hn509Ip=RuKtx#gCUsc`y>BC1G8)h7x9tJ5FX&`f&^-hBgI#h-1{Ub$EX5>Mlb z5q%*3yb_s^d&+O=(RSG+x1IgixiWWbWiw)o;>J4UGL9w18PJ~1al8ch`Jm`RIVXS$ zxe*oBNL|eKF6>4)yQ%{u)Lj{eFAzQk#70XH->pNQ3`iCo(A1JxnB=y4ayR2B!A=2^ zgM%)-+Bmi~+Uc-0dIyr6?P(-)1m}cqZHC`zkSFEmBAg)q*ESPa@AA(luF@ z8*$2z2L9#V8LPe%4YGauDKR9x!3x->4?>8)d*Gh_xeROdzY|C z3_Q#(IVy27hNqoD4_)tdf5h;fTwSbE*ISZjD$BQo4`>-WYfh(>wa5vk5ABl@^1L#Q zAW>T~g3CT>ei+>RqUWG1|6!6=RIG&vr54$&Sk!>+tvls7v2>p=gIVWv98e-FTwq%XpkCk+I?-~mWhAHc zjwD{fZW4tt@dsqa-q4^R%j{r7WyhShMjgj7vW9oH8aq33YL-jath0LmWgUL1Vg8Q7 z`6w$~+MLPt3$Ta0QM-g2bf$_%-t9OATN1*VL8;o{?N#fl!Q1=LOzeRI>1AeGC~YeW z8H~B>3~+t9WLWu%Jnq;QyaeWdjrq}ap*(^FF#0lwZBYL|J1YcN7vXA7oDLX0oP~d> z*q@rG+deK;Zt28wAc+;zv^CHXcVqwnMK9N(M-kZ84VdVYO+Dyg0O>~H4>p0%N<_=BBxm<>L#VA7?VA$^ zR8v&%qQUwIbPfYalGm|kTzEb_I1>6s#PHP?_~bMPX18jK)jXp+5EkSBRn|5`j9o2@+S<##8_hDB#L54MzWL1R z{gK&{I0uq;EsGL02MNz?4Y9!;nUa%kgXhmMdc2s&3TrkslVgzw-L{{qN1j^dLDWg1 zWWAx)Cm*-O-K&D>>4ClcsFLNl*fB^h*dM8#41yp~AD@D+|VE0DG12^gKt2;8VEW2M~;IBw2uaCYi<1B8{^L2;05R 
za^oO*?!Uto!nBpwzINKH(coROH+dzVp226yq5pUUGau?WC=j&PxV@a%Hsd^!Cnf5} zF-C>8dYgfk*=eC_qmDQHF#%b&9IAea^X%`H?iycVGQ^O~<0Hw&h4-$+8l|pUtG$yp zd{WVE>3?T*i{JZeIF6Jd!lx+=ws3(C_F$~@2k#GGpTl9msZZrkvTCyJ#zgP zXOL;q9V(0-ZsS%^N%6Jh+v5^|tBp+3`g*mCt*p{7jro-vez!Llb!{OIze$2vc?&$y zkLKUTCmYEN_#ERTY!Sr$>k2;EauGJkPIlx2k$^J53Dp7#N@Lr~bpEu}A4Wz{Vh_y;eHv5NKA{JnJaGI3&O!!A2(S!5?d(Gg#SUn` z0Zj3O>~L&p0#J>jT5%-2V2%nTA{xa&jXE0O$Iruw4+gx5RNOsawbP{mV@b9U?bs8C zc!^9#ycOfXP=eq%^%l%1N{nYIuVr#UA6>=Pqu+9J>yTdAjdxK^D7ou zQ70yB$pbJ?wmZK5YzL&j6@tTNZbrx2(?xC$De?C%3n`x_b$$yb`>}OSYsTf03#1@< zbDX5)l%ATTKX`**y^nv#BPjG!uR#%erM*tksz1?vw!rqH7ftGwhDL9&{s`qNVe`GC zSTr>>rI*EqC6D!hkCrxIRsh1=GFyXQ-mn07xCp4R0ey2jWguX62H1zrM6lUhGysBy zrV%sGfaJ>n%M-Raso)MO|^HyIBg8$vn2(9&Cn7WS%U zR2exRom2@A>=M6mS&_1x=V*2?R0HRc25;C?1LTs^3Sd_dz)}!OE;DX~58JlDKhRTP zeaTu86$2gc(HaogDT8egh?Oc$Mr`XP%{hzZZw+BP2{tMYd`rZ!?0Ux3>_W+B?ld8F zm6+xGguvzn`jzjS>bh;F82!}m|G-Mn^n%BEj&QyZ<&#yW{=V3EB{Bf*RaAqqFQ3b- z+q4l!5;Zv!I%pmyplrI11}d-T8*CLHz$5uPybDuein9Twhk-Y`tGVVe%&z3Ius+RQSfdQ ze4aq`F_xvoy9i%g6I_SpU&yTuH!az%L!-X2%eC2QV{s(m!VPdVG6cKI?G%Gp9b&@F z8BW+UOfHG>1+gM8$PHG49d26r)4_g4)_Q>HO^dk6mn*g!D#fQ^&>v~@Wm|9 zs&uoB_@(bG!LSIrT$JxT5ScWkEUj)ozGBiZ9?Ne@klU!Q(s0t1w1(|BH5ZV_%atb< z)BCm%_^9nZ!>M!77w`(SNZSs&F@@{1M=R{PogNu{u9Pc-&Pba{)I?XR8Fb&* zq>v7shq0~EF>jOlmZta~EA~BxO(9C6jeU{ON-VNxIWBNt;#$A=6nwzc)*b7_-LSUE z1WNOY(7hA21=u9X9z;X(Jz$vvMK(AFR`K&elGTM4_)KdoS`%`(K{kk(eJThfrf{S# z1_eL^p9Cxg{vG&e%XwhnDk6}UI8&jjUSOs7Bk|EF{n+IRIqWfjg3=uX){_hu76b5; z>PGy01ubw{LiZ}z{CF~=FAcs@^yfSJ6l2el{`5*#L4$BQgBYfk;1^4Tyn~YG;wpTJ zMd*v^gs+KEqCQQDF1!8-Ja73;_evRISBu`x_lBZ-mSxXndEDwvX*WN+PMX-z(-ikW zlFZ#KIrsOK-vI#RcjQrg#q|5jAAGBS|M>aS-OVq}&tMtv{{h-;+scaC;+X&XP~XOj zQM#y<8n;8tK08Iq>0BR6w}7(C@bxV*kpA-hef^yu;-%t--jRqdPD;h!E$8fZX#Jjy zCDM(n{+!C+somF=Lp@yf9xxv*x3@Y+T(iXUhehNt0J+E@Z&-kgil>j7UWn7iG(wUU}XwZeWRma<( zD|ZmlokmH% ze+qtEQlSW>)`Ti46dZF{PWfWtjISk>Zh5~#VQnXz+9JQL!MU|nTMaVzXKDR@?%J8?H2mi(&V+E+#3s ziV$!6to_l?CC+}lkDYh-nDcuF?x{TN#>yjb0RKVUF1;Td8Tjb+ojKzh>35omDeZy( 
z(z7vEkslv}_;{sXv4}s=fkU7*DxP4%b!VRAd{m_Eq}4`#xqtxy{J-yDz;8*Q?{^Im zJ@>jaKcEP%2FYqZMi7J&h?0J96^;Kv41RPp`=e>~o8`7M@PkpxgK@=H<3iG$GH0b$ z=iIIFkiO;5ZT`QQQ35;+&*g-{G4LdCL{tZHYNaZ=YjNP21Y7YSML>D^e*nN~LX z$N!#Fxn&dPGj8o4$B_|o2KGDkjf&tfiVaN%v?nS^WO#oW^Z%(}@ke2|@p()69GOu? za^hi^?UySLhd+3KfF4Q4`shl~?}Y>Yxc|?3e0=qHel||x6)7lVMS8teqxd>@n+rsE za!AotIKXP;z1SUIj@>&<(l zZRYn+u}zMP>{k$12-xh8yb2y&mi*O~3nZI)Xnq#PzvQ zIm61RNGy)0+)&#&cwcS=pl6;+L7;rv<|Bm6l`4~45DsJEBVlro>CG7gG zNuh@?&blrmqJ>mRSqEePu)#i?oj)6R-24K$eOn!Y@kigDoE#)V@R0JlVk-lgKt!ex z%_&-vq^)uLHvi(!zX#oAu1zk!J(RYP3|*Wy&V|gULRLtm0(Z-A-vW)}FEz}!tBn2% z?0nftKY!yjQ2#E*zQ~s5BvoQwbSj&|V~e2Yi*dVBQ^Pe(hmdLZzTXbbeZHKS|SYV@OO-vjpe`_w>??V)s`mjcrl!CE}%V3 zi4dU0yq&b|ucMnk8_V1LoHxD$0Vz9_DI*DN#TzvwtF3tl81-Fpo%rZNafC?Q ztem5ri2lwhPJWVZH95wN)3#qaX8(M0aI;mvy;2*e2$Ax%eQTV8_n*5-eSTvo2+i^lCq4H2&Y()yN4k@1y`<{$#PE38rb-uH9Sym`@n?38HlftN>Ze{Pf zo}Jv#xA6jrR~fCYz^*)})6lK47HQm$D6yClA<_2~JrFQ{zLKvVRuPltZA%KZoma}) zXxIG%5Qv{_L42HtEIUEd&coMtL=vXRia$sjPtwC7zhKM{J1dtOd0aoP$dZrF!-Xk6 zem}|Jb_0>qRQOwYUNZV*_AyS|A^yO?1ct%*6HlC)0xTrF>+H_XxVuw!SDo4Y z^2aN=$t~x;=iKu?&v~B%(SJ^ZEC%uAxIzLXlC5@-ByMx7hd>D-H`_9^>ozn0Q84Wv z=NZk>3YVgQkpsk$bx6QvqoK?;9vzuA5F;agW)PxJC-F!?5{RkAv%gh|_x1J@$b#RX zsWfiJgn&$yLt8YvLqPY=p7z>QWWpHW zF2GlVqz&;j2|CbKAfvgG1qlXciu8zYYT-plo6emCDLi9F$UGsPwJR=5I!K)P`NDqm z{1v+;6}>3Go}e_ym-4S(-uf zbgLA9E{a)59|AJFQ#?-pf+c@u)qQ`jTl)7vsUgGZkj#9b7kwkRt^pRb zCi~AC;{NVX^ouw6*%&|Eq!`lfhexVZtT~f((2l0ky7wB9J96Q34|kg4f1SDhF%Er= zEkRTC7gN&T;X>c~w|}sU{wwbM8{hpyU)W#6QNHCd2u}E|9qB(gKEVlp83X!-4Eb9+ z%eOi) zB?t&XKs=t6Z0wGtvQ?{+>)wF!#rys^HKh#$hZJA0Qs4?gZc3z?36MO&`BWk{#MWd= ziWHt6pwSYK1)IMvD<7(C3rib%VM%l@ZBTjM+UR{;$VVK-1Qk|Qm_Q=G z?78~8-b|pLl(I=|_E@H4s=+n2&D;#4RURY3_#ErO>YBt%QtyRl#z8Hs&Y zvcKtX&)@+%|JVntiUJ;49d6BWmNfb}sL(7?12kikmpMVok2~a}X=k%qe^W_=-a#PP zg8Z(dqrC671|pU<9=YgRtfLSp+J#lbowv;KO`P8YS!?*@?re+)&-o`LMCWVu#XsX2 zF#>=AMbj@}NN2V~m=@eAdi^xW2bB2^(aY(sRq@FJ{7lQp=A4o5zWzEnT|_m!=eA{a zWz<`^|KZ0nNcSv8E}|%`wjI4ukIyC3jRoH61-hX&hxczcrf8o$-dqve{E>u4Kz8P9>m>i$a3gSUe~S@6#)_Y| 
zx{>;@H%%k3fn$LFlTNjJRY(v1s^A>HEOGZ}vBHFL#0r@Y)yJ*W@&TM}2G-?zPjTh@ zyg1txVuz#Ia&Hf8J-DI3^{!u_s~Jo-WMk3QQ4vjKXZtj{{Su~AaT7EkjB!Rv0$O{X zH#=`{X+|8?w6#tMV^{rNQn2SO8`fqReg z#bM>ruQ!Zw3pR51j21nM4^%mDGuiJBWx`bXQlR6>pe8CBy`iv}?yZ-F;dl7Uz>h@J zbGhiJ1o5yA6oWACU||+roM6H-0=roQw$7KMYr*1uo_F(=+YM# zdQ1Dt;@CML!JPi<`^#0Cm5clQBpScpE04?Hvbd(iRij^85*t;>GgGzB@9EK`4+*i;_vM{Y`Z%Vh6iTc-BC2f8~y#i>2 zx3942lP%@L{zF#W<#&&c^dRxVi!lMPI*uw`I3qV9kqD!A> zobFK5Ud-z*HY4i!WPE9P8n28BK@`De#xMIpqR4!}BG8VoVUKO!>Pguc$>10*%JcgI z<$bsEY|-K&oXhzEoaLqL##hm#WQ>E2M-7%i_;Y!sRrwR~Rj)lAuEuLUMOPeeU5Gx1 z*A``qqor799D@bY(L>GkQ4e%fcILXLgFi-=CfUo{JGU!vNxAe?yFc=zYS4*GFHN^h zZ?g5cJ(TK6rryJne`x@jkG036G?z!?y=>aLmF%C^&0MGh`xE=oX6vgcU3V1@FRYw3 z^9*W|fCMb)`0C8d)toq2%fGZnw6&I}^L7FfAh>w{tZzl1Qf-W78`0XQ1azq1Hx zgt0B=u4`Y)V&y?9hsMU-P~%T)T+0A z$0_PgvpDME&ymSc`AV`&K`4YcU+yQ}^t4p%~tl5R7tn1mz<7WCsT znb{JHvs-JvWvz{-r}W^KbiAt221Ub9lsvq$n~dZWInpE{T)|%Z=6+O5Ib)^|pgH$w zGZ2D;?H-b#dwtL7qVc?WqC@T6ydS%j{!)OOk=}Kql7L4;PKUdX3zE*zI}y7KhkRJh z$jO+KVJWRf3NRs?;QZH~_Dh%Dq-sm^Z(Y#BAKwGLf%MnRz25_!F$@aKKh=ZZ*X2S) z8_ixeSi3^z&bVu1nA^T{a{EbaePxlY9u_|C3%=%kVpG>ye#qQbuYs4k*E}+@LDW=+ z3k&ttR4FJC7Z=D{SV;?`w>jG6xOGF87z))fsjEKGhN>%^RXW7{HXGdjB(&gA)5Yn6 z2}qa=La^ud+8!t@0qm`)!Qs68mN%*P^c3|A=%WJ~I!|tHKOULj5ezB@a=XC)qDs!6 z=jf|lp*^E@V?O#tuQUu(45RT=R3z_92P10P7?zDw$5oV)3ST42tYQ~b&jz_gfrZ$2 zN16loOL}=kj%MqI4IB?jru`_fS#Q0Toz{nA&g!yuIU*u|GB;~hoDW^X6tYpG*sY4S!P6S06y>Bn;1 ztM@KMF1dGgtQ?DK(bSS*!M05_A=t`yocH-!_>>#T6W_TP*kV@Kx{>EGW`gsaOI*i* zS8Q@5i+7Is9y8hlwa=zvpN+5j<`@`?Hsx=ap0Ku%8*CICKE?1@X(8m^Wo-U;XD2**598g$TgVmd zrn*JXshd@u~tA1;EgfSlC| z^28b@r~(|;PZps(>{&}+tu_*yZ%m%y!0ZR)G($C{r!vRmk5BU0I`LQwdBgC~-m{?Mzcr$y6PC@{g5nK!v9jsa$A5N=o zyuAi{q&EI|RUyeUwprqIz;T5>LFG5>?T+%5X|SYu>COYUtiUu<=g}0h{2aE=RkA5L z!WV@qyOP)=6SHZjI&wwR@6{YjH))jk0ZLl?l@_i6n$%Y}VjWf~(!~H(x5ye> zOlEy7{YAzDyRV}n#Pkk+4(q#oMZH{sa}98pp|-~jCv#p)u;lE|BxXvTpkP+(lRBAY zFj7Y!%UW4=Tg&fLv=TAYjJC;#zcOKB+&!Bn?Frl($-kk=aNExu?{=*Jm{Fut#S;^E zbDR)HCvx0`$ar`*_JBA_D4dfy;UcxL|E?PCh^H;uB&bRq%AaeVF!~8GsjEnXkS68y 
zW9)E-5EbW$_ncl=c)+$`FZINo@l4*ktJ1;=es`0w<<<@vo;eK_xuLeSxYv)T&fNXT zC>FK!hPDWc^tIb!%nv9*PQUI}u;D%Q%M-0HR~miqMHrtW5??#w9IW-ZCD3Q$EufVq z4Yl)%1r}r+yS)dR+k&l;WGA3JrVPb_=f&ZnyeuBDh?Bn2Goy25Ww5yuEyVCRDJrtl zwzJA8MQORRvBIx%eCw79<8V%Gry|2-GS|z<9;r}Yz7MY8Eqd3k?KMDf`U34#B1^v6 z;|3+9ndIVRnK0=BM#z*r+TdT})))AE1k|=r>E|izUY3c{;K@ghlL)R4ltSq!3-~cCuK7J z$#rHT^a7lm?WW_o&e8kOXs%ECCc&#@hgk|gy>~dkA`;<{1;V>I_xm8y6ckO~ zEiCysjqh~r-vc>}S`W}m7{1<;AJyY<=M+1yG2|3Er(NaY?U03ItEeu1kW!j3N_Hab z36HkmQwA1FX6OH9WSpO?NOH38xJN=yg;Is>%8gy$SM(Byw-E%GF*|#YJ$P=b%8wu zgwpRfX)KyGN08?QB`Zy+NUwHV@%+#;4=p9rWb<{Jkr+%A1E`8qR z4e{-lItLECzDlA&GH2}I?zAJXZ;AJUEgZ~yI6;Dos;t>wpvT|mJGI-m2g;qdc7TqW zBwIGh$n+GWNc86p@^!OiG8(`HbfLn>+2I{6W2#aGmw1c+HM3N*7kZYM|Tps{S)H{n>@Mt(otG{w_S}S40h$m1r|~mEgdF7lZmE=w}P6%hfuS8mWah+NxSRQ z!K7~%T5l@!`Wr?xdgVLh`W#EQ5#W@c60uD;y~`;{^O(vh1~)fA`L}l_%yP@mB(lJ+ z1-rA}A(rbgix@XC_lm7Ec7xEs2J~vyc=!f#S`Z{Da$ab~iSw5|wXZ_&nCF{jY#2XW zA;GBh)?#XK!4*rpo}c4U+!UT1P*@HChORU0WL`uIfA*NFVUR%*!hhM9H~JLWM#nk~#eoGi(#FlPWV3XE=yE8za$;sS!Yec23^4jzBpZ687Xppuz|tc zL7pTzR~fSuU?s8#3TER*E|BdUzCAAQeDKYo{25hm3Z1Z16%Q?XW`KnGH~>I?b2fxf z81t)qdH(s*WkeCU`C-Hn(F75A=*zCDOs+QaO_$27wS4)MfYdT+?TGUjRKR+?Wix`M zS7Zkw(cgJl_{~(x{Qe7S#CJVeb22c9=N>s&Vap9fI&S z&$r!(?T|-3Bcj~L=`_pNo%PawWeQ(HTRo~-pH$ep-ffR{3}8 zKvN6$L}V#|tE0KXPX?(C+$=<{wN6rTTo?;#@i}-?E$JL>K9Ks7&xx7PmBiblyDZ{e zn1eG`uPt(B$Xs)lU`wfH%HSIg+o4n&jjnvLSU9!|(O!039%~oFG-ea|M=_rsae%=L zQ-+7x|yT(57s#`6`w+Y!oG?q_f@+q!w;DMB! 
zd#rv(D?90yp8s4I%S8A=(92CceZK|v?XCP4%XmTVyH$7_M|Yu}@{!)~zL{S2KAF1i zym4a%mvJ>ltYwRSAbk9}fy#`-hd0;LuF;$szO=v1Q(DM&fIBs}Aw6URVvX;D@7SYC zyi}%df2skCNf*Th=^<~gkMK^Y)R!06hmV`7agZF@J%xB| zS8ZJUK(HxBAy6u4M~6(deu zvF$hyqZBsIm*Xsk%c2ZkGa49#ystUNwco7@X!$>u0=chxo;K}-%^u4Hp@8uz`07a79(-(F2m^VWxno|Dl$ zJ<_?D$;y*>tcow!Gf3xMCh`@cn>FYBv#MUza&wL)>V%+ls47iN%nD2O94ar6KD2lz zSGR6EJ>ZG4UHkHrF!2M!&XSK0m}A8o#OY9!$1e0QglZ^8DTtW%1t0DhQDUXEEy_@2 zOmVvu$a_I1FzUlr1UzyQDjtM5fo1b5-aX39s-haB+PYeH=>BHits73E{*ijy8Hrhg zIHTq1jojH$8<*meC>M!ndnsF|lV*#Dpd>PT0c+;eQzy%4U(3gI1djD(R`?`HC ze#i1w7lcp==45B0{U}3a;OGhuyzEJ+;&a8JR*pQ0i8;etjfwPb%71ipS z$G7*ihcT?GZ-OZOKAoF0ejum4?ocjZWunSv{WD-klblrp1ugQK5HF z4b8t!lL&hzKz{zX9?LByRtcg+o0gtrY$EKoNVV1n>kWT*)vSX$@llA8SMq4dN7B#% z1N0n>S~a5{&sY}7_dbL2?U^m}D@Sx6DQHN`lr#Z7jq2YI2f9n8eS-3WF z_FSt6hfIS=Wg1;c2#m;#OguQ$OK9nVdW4TxVh=5R>~n~dFlmf9GCNS}rycg;N!{tV zu$r?6S@tVV^yvsaFlIdvnTtQ`O))?B*@N0FO2;5BEypF2>2@eC{K*)(9;b0 z3k@}#mL*UMT0Yf->xU+)EXBi5RKVG2O7B`3pEz}vp-#Sw*q#I4U&+KH>NGs&2lk{} zY8cRs>nw4MWvZU?pgm7Y_GY@Ai*cbzyoHUprcd;_12xutLBQ?djeV4koGq6UyjmDI zh~4rpq&)8*y=dgq#f>P&rlY;Nkf#?!M$RJDOKtBzx;0R?WfUP(t815ry$PoB6jMFg za26iPF6empwa|I49?)SQ$x|IV5$m zz+!v!N*6UR2n0sJqHVBd<4x^M(>P8Up5BA$FYlGpn71&_J70Li8VCYMLr>D`_U|f{ z#>_a@&`L?a^o(lI>5o&*ALLR=kBn;9st+)IQAkvlOQA8;xPhQWMm3=KWevW%Xk}?y zDn}M*xHcC)k*PnMl~%0k!Q7WQbm@&~JUN;w;yApgq>@5d`E`gopJF)cgE|(_!uy1C z+XJTPF4EGHu}ewUmFW~U&JZbko;(ve(R*liCOIc(|LAlgc0a0l+vCGpu5)ug1uybw zWLsV?N8&h`j}6py1FB_0C3X6hl^+*|bt>I*(qL4jf@D=49ySl<;dE5cU(@G78JilY zu)rFtkEvrw?oAdT7x3NA7Gd=c+GA{9HxFp{RY|`z#IZ+Q3>c=V8#+rj zOPlO|NQtqI7+4!ie3b8WBZ3Wk514sT)@;0@d-Rq(``*=JH#RfN10&CZ$1FkO(Dhkf zyYvtQT*yZ}UCnmvAqDTYFwI*F_Gd&y#S3+lEPmS4qcqE?2pnUUO2!9$ykn=g=`{8Es<+>(qvni7Q+`d_MsB%tC>PnWUH?Vqq**yG&5<}d^n}(#w`C+*xXN5 zVrE_7qt+`hpwxFDIv@`vXmsIh@q9exU*o224|lr#9WQu&KG`guzO27F)9OZudqnFg zE$4BSJ&@}QH&&Qf=S&&%A(C-p}kVoo>Nor-`82#91nnX>z|(?}8Bs-d?rK{Q$o zmRsIUcsO1rq8Qr_59eTB1kVY_H|{El=E^DSL~u(Rb#az1$A(prWf_6Vo*SvCmf6)0 zj^E+?^dfqa@+@-#I#=KGbmJ&5hBc#TLZLK}{2~({DW$w@CnXu^Sm=w#P9f_|Jh}Wk 
zME!Wn1RuTS&V00;@YNv2;vJO+Ns)W)muvYV&?jCR4&l_00`!W={^sr8Q>_j!UUo2T zP?X={gpLhlsw?rCw0jJ-PROu%p2TsOItk3bmC5w1%XH%mAUbzV%{VP;X#o(B))sfx z0N(D8>i=G(=o|b^pzHsGF#KC#`aj(NZKTfsfYQ(^nQy_8*JgK?9t#@TY88Yak50t3I=)F6WVW_G=K#|L!sV zAycJ5i>qUfep(UBot@kz?eOk!yn*0NsG0H=KLR76z=OMiYBr;E5N_jC)vc=I=y;hG#p4B)!<163 zWP=U8Liye)!g6o0tIyaP}J%y1Skj+u7`}XViIxNvnHF%Knckgzpow*kx2$mnc|E3 zfnGxDJBeEfK>rTXrBI*|+<*qq>mvvURMgz}XojuMWy4Snc+tf*^&d__wVnXSZASo2 zHQSfJyb1*z2SOM)nvn_~560wsq(o#XY(0g&TCbVz^_#|9-iOsoPC&NSdux^e43z21 ztMjg=@x2IXZ`ifW1#T;K6S6ABfu}|>0!>Bo0I~>1h3)>k>$4r1Nu8y}TY%@J9k$lN zKo!HY->?z*TEwc*9_acUtSf7`3)cl$>}PifbUOL@;FFv9pJ#GfzXPf)|?Sx}TU5N<7TwxirX4{j~E zE~e%DJ8miqI+D3MqbjzFFC-s$dMQqW+*-2pZ>5BMO1>zfo#J&q{FVV&{c1+c2SI;1 z8l9Gee>dOyZ~L8(HgLOvfDqBwksbOy$1!eN=D-8Jk2jX~Kqa;35xd43uLgzhw7cBlDbk>i)Ox4%Zt6ZZY@*ox4(;rk_vzbBz2 zeCH2;9wbd#>T*`*-*fguJ8!ZdG6mTK+3$gNbgJB|AC4*DTJ}H-s6CJkl4*EDf;;8z z-M`Kfh<}_u*c|(DMiKSi3NA;Z75d{;gRuR&O5^|E?Mdk1@rRQ;zeWBK_J3ajjzG5k zoZKS(|F@;$|JQ9qgnr4LQhfHi&JH(D`-4}(dmuP$H^*4=3pCc6!rq&eMqTztJa|7_ zv}odJ)H1$=?33p8-A@pzIi-0KgOm_n`R|$P5zh5>`c3^e*pWa;5QvTM-aF>k2!TI( zAAykg35+HH^8{NE2nhlqK_Fqi7y|@C;tvPF_Yp{lKh-lJ5E5UtZRr1ZF#mty{t1Kx zfsptUtho6zkuC&6;(r}rC*0q^MzRnHiT?(WoN$fbj}Rdc5(GkmKuG*-3=jwj0tina zVtyI}1VZ8qF#5aJvJi-XKc(#ngv6f;3@4Bi-*I{KFAzZlLV`d@0KGE)7l?$fbq@qW zfG{QjDmmgc3jjLFrW>ASHB& zNEejef=chbCm|$nd*;p=&fGiSn|b%mnLF>DIiU1U0F>T zprN4wK7wC>Is_;JjQjU9?5AgBU|?WkVm!di&cb}~AoGbMN7>kUIH9~eoZQ?e&xi?} zJbjLjn_Ey?=-l}Wl9G~80U7zr5^`b}B_)0?Lc_$w#C(vMgN21d;uQBOiGTP*{RkXp zq%omAO-pkc*mszQ_Am{#9)JP>4Lu0$Pk{g9L$i;Tj($G_Bh!I{;04%2z&;vU+I@7i z^z?Le;MM-%JV1At{>Z5d*Y+QU-(xuK!Y&aIk<7?|%|0)%(H^FwKXqaM zk!x^@RrX z0cKhnka@I+0T}QcC77SAa91z|1^g?TaJ)&jhlKDcC3fAA|69*5S7O(HM$Znv`Y&lZ zIZ?MVlx~2?L;9dK#y)_GwS%^$z*DFs0OSRJbj&^1z zu%=QTj-E64L1|K1|8naF6@W!Ks5qvo?Vpk3%4)TLxUQivCV#bkO)7pY{%&T-ad#%s znuO00ABWvX_95FpNsGS6o zgi?AWAL_v?S0q7r-scURCY=!qTkIwab1zEqPGui|8ad;}lKCS&snSer{d}OEjdd8|;n zc_|RDxR46r%^6KBuzUZ0U|0odajP%txSisXgoC*q7k*0Nv*%O+S~R}ryoy{&w+b_E 
zZo<3SfW-SeDQ~FWzI_DAn*HQX2q61TPkF^3^V(S~Kku5Q+xg}fpT$&qCM=1%joH}2 z+eyo|#76=`O76!M4)rAWTXc;NirlOmf6aZ$Y=UHw&&FW4X2!)>rRKCfjuvP+rQzfB z^&)9U$K!5AkaiY{I~3Ak_Xtv6)1N099Hj6??r_noBJaz5923rNefhN`ImYkP3rqVk zlRXxSU#c|ZtBi^uq1lojzYClMZ&kY_YU7>y%;(;Ptd>h zb~I;E!{Du!b^*5hv>i7&i>49zDK3Yh=T=B>@wcXwiZb#kE1zz~^!XJt(?+JXoBYz2 zorP`$1JN@G_E2t=UPqLb{L9P}jmAfXUBkw0lbFL9d{i%2Z4=+uAQ6|=)T zyt78JBnwVoTNn&da-;&5Lsa0Ac^fhiI@)v@U%dJH8O6tz( zKiwv@o&?U={$(bE3Rqp+CBiOH*xi0k&0x?ASumWy6zo&#+nYhBJe6HP-oFp75YJE) z5u?PecjBjl`hAs4PMqHK)+3CA7hXsY-7QCnx`om-H4MEJ-`V3me(U_)Fdpu=6_cbI z$r{_lN9yZC;d7yE)(QA2yr3TcUAc^P27$|WKhhEmU@3(31yR;`6y9`Li;NG?8ldxG~w*HoX>fB~~*Xnm2Y zFW}a)oxW1S*?>l*T<>?q1{cw%b`_BD55-PA>#@)|Bx$PL2Y=vp zj9l@;JsIWm6Tvu7cDT}myO3LXdRJm0hf+|&%8*$Sf1yK#!Hc{J;4RaMGXe?h+>eqS z+C&`GR^=(7ncoJDa?i@NoHJDuR&C-qU5H`eUs^(3PrE1XYpJ8wx9xYsp{wq5d2EV+ zyzUCF#X+mSL+t6OVQ%VZJ|aF0R>7*W2KlJMrzEWUf;r@bRx%_fkAT{)m$lfSG>Vc6b+MzzfLnT;)FW*Rqn zXWZ-=!Ijc~bET)l=;noUt+JnA4sgHA=cz<}?yAzsveQRc=OEl{hu1z$$%84o1|SbK|Z~)R>DTAovKsX#y@(&x1P!A8a!oY5r0S1wbT(u=lqx#_vR(FWwuRp_*Hf_h5r6xK(ALa*1@jTR7fjGk)D$k!C1}Ma_7%DhA2zp( z%f&~&PMErH<@xUDyKuD;wE7G6J~7EqZTn-hM{i2IoaBEJNTxHSOkb zV|q+|n(}Ch3Jm6Cb#sbiy$j@?x`@9~3p7cNaQ3EimF-W!&mTYLGtir=8ZxH*%7M02 z^xf*a@kFC&*rB;Y4z6<@R~lSIjyb1J%FLbj)rl8UBM06pw!kh?0Y00X{q9*(g6Z9U z0;j63t1~X&_nd9#kQpoAwVqtse47&_8aFr*UUdDtHi?&E4U@6@BSo(NN&Bsm1o-m= zEjfHBibGu$}!E zJElq(osNy2{79}_W>YlMJfY*yG?RjA?y+Kr-@K+TTzx0)^ED;*+Z?aqN&tjM4c-l8 zP2Y`PyReXvCO|8YlpBA2MO`i{Oc(lw)T?9~A!K-B0~9B>pLW~ttCs%SuFCdsec#iZ z1?tD!=l^={ug94xeDymKsM4&A&a16;y6@O3Pi)sx0Ztg!hBASS0DGxQV!8^}rj(w< zodSvq+4j#J`zUtTq$#W6MHwnk37tSTCV2QhxlnRpi3Yr>Jta(+yGxV`*qKs+b_CfS z87Qz7I!xYzZ8v8Z!#_Wb-4;dKe*g0v6fccj9{>-H)7hO$;cg0(`HNgQ;ye`yL$9P! 
zIx46@0*)|~-2=bR-Clk!=At^|!#huVY|x)mQcxBL1XBY3y@>wTO!ymP+D_c439`q4 z3h+~b9bwqs!O~3r#d$qnib~Pu6=*B*od8(`O+rzD2^@His_Y$@erNZ$&qp>ZWGWJo z7oSmqKo=?iHz&tXfrYH!jB+Yd&=|dCB16v%QhdreaU~BnCqe}xslaX&bnn2+9ln`e zO$APp;@Kw2Uyo8CO}k3SHFMCaNZr%VTS6&d4R8}son$gD_7n0`85PLNr)W@tE#18X zz3-hWec9XOr*rPGnDxtA1OwHPm;Wf7R9TfAn(cjSRH-n{_)SooG_6G_7AmP>@yYQbW$O$+Wo)|uz zX4u9uxYrC^3{Fyk+rrP{ zffxEkharE_Bva-q*^^Wt2~;I$iWr)p=Sl@Serm}k@J6$@O5(Y_OHkd^p>?@d3d97r z@l%(8IYUcoJLy9B5S4a9fpk!vQ%Rd?iXo-IiF zVm7*|E|0s|h8xvur2^ML?Fl8DLid7EeF>#dWKiaR8;xjN7ym6xDrDhu@Ow9iE8C`s?%V`)PCq^Lh`v-m zYJ*E8v!tw7w9ED@kCzQYci4t^;heAdfjNU?g&b3=xB7WhoaE=k*$BQGg`MX<4H!#e z_MPCjt+lbBIrFOBsPT4;^toGelD63jcACeyzXFu1anr$rMX&PqqmvfWc>ZBCBT! z-6O?adG`#HN{gb;?&Ncd2$2r|XSPv+zs0xryKnDN0R{cx?OEg}^xi#Xl<%>PtYv)w zU29!7CkKI+@t*CMN;#^<0v<3HL+h|nqr|}y==vlW?RpeL7Tins{|pK84?un^QW16) zeQ$Ge;uW^I(C0>CnfXS1btBJ&kR4jZ7uS1fZKC787DKN>ShFeNwqZ+KDv)U-oRIP$$w<}bz! z_MC0QY9=|TB7fHz)~28F7W9vDV5E^m5@Rlo6WRPWn+o*Pk{j&paBByOs6Za*z$Bp> zkPwo1&r;0JJ7hf-5RA@BLG_GXM(_?&0b6?`(F|wnhlikUrI5-|k*%7QMOP4fyPYtJkE0^YJ_ z(Q~>exMOh=J#AXyV7_|0CR7v`?U#5-$ZNWdr}0A5qure7o(HUNQ@yQ0DeZeZ@lY$L zRDjr58r+#X*^O;(`eJq7(B;Ra5m49o06jfYi3M|{NAVC`ywu4V|1YllQHT(wioYyI7oO+StO# zbD5*{B7N-~CyHxIVb!JK6KH2&RO^am!&ayZJNmMD`MqHt9ER}*?kM+1*V6?jZY;xaF5AuY%E7r<0pPUtw&kNh~bV_2v#WuWnTpL^f&b{rDie zpA|NEuQt!3ebI{|iSun|ZE!hz+K?~W-NmWOua1wUNlw3}qE#SFl5w;)Ub^_r+)*AX zU=*WD1tOL~cQwx$Jjf(zcwTQsHjKHK6z<42npz2r8D@55N!J@p4UztQqp$|_Gy^AM9*UlTI5!-f}& z2%eLYRG`ZGsE~6_hC)h(tiv0GXlhwNUBl_;$~?7so|=57(VZa)=MsA=SCk?;mnH4M|wCL~3ye5Igo`{)?<1Hx31xkR44V?ZQYb9*|Z-Dn+s8 z1Qp2d26u2j+)!~mX0v0l9=Q*lN(H*!fC;Jh^?Pj;n@%OjeJ~pJ`=>Nbp6;smUMXiF zMZLFFmJX$b?w?E6Em&g%Y|5tcXS(?tU)+1>?r^x0>+nkc zhF+XX;PPD+nYENMD<#r_Ms;<+P6OH5TOl*P!kEUdv+4d_`rkdTWQ&03#WOZ_P%7fV z(8fMIns9X^pGfS%t%zn#y(i<)WE$Ud0%S6hz(xfgI1JD9kwSMvC)$NutS84AuhPBq z7$3r?^j67RVodN_9#aUc7fG&ILsCmNTHfu3BRGA);C+88f9c(>PlQX~k#)jB-d`g!0Hk zoP5y!zku+PB6GmPS$-hQ6O^o z+dKH5J?Ln(3|9&N;?91hfJ?bEAnYQkPx>ENH@)PkgT04n_;u{_Zy 
zn|v?MUv!8_D1m0-L*hEP??+&}1wC*R+@AIRH%$D;p=VyzP4W)@vma1&Gjcj}R%K9&ys<0%h27+w{*DkB@=i>L z{&H!KB;2{c_?PQn`@aTb=2~63ol2h?5QT}vSrU9phYHBU)Qgmk|FP2llXTMfd436B zhV@`hp7~c8x&Mr3W`S1ZZ7jbEh*}a8SRwRF{Z?I{$J)6qs&TR{>Z&G3u5!$_ z$kGFKpFiD_Mh!@T%%1|0TjL4o@Jad>S zS~l2?PeL^lcV_)6Pc@`@?pl@NSZ%7BWKibN$Fp&lhY)d19CTsdP{|lu?U{;<;4LP8 z4H|54mk5X?}E?1omi-X&p1QE4`>*;SpkQ#_=sb?o0cZu;EedAA|~y>@jU<*?&UrEQn@i#Nj}D{lP>)$>)qdOLfS z$^XcHDrj}}L~(=RqQ`&Uue1@XKHtygrHuc{|fp; zkEbe7g0xXRi}?$iL&!q|gDuYa0BUN`JDfEs;2T=YK0m7@d2CQLH>F!_A@_)0UDRR?zQMGew&q2sY=ca^EfMWe3%=BVmYc?(~H5fxK0h$d{%q@ zqu2PZCHY3FBn|yy^;XcBAvV}pM%+c!~ zTy(GGRptSV=4X;v)mIVj=EvK!^Ag>BZ9EP%4qKZpg(S|D?C##y>)T4 zEl>@b5^qaoh-o^YCNG~;nDSVLZ!7*{lb19i^LxAF!z0R(wY)1Y0@i(4MKJ%Js>iH6 zr_;sQ=qVc}c2(1XjRX7D@;O%)Hz|o@)v^uSZLKzA8nKw~cUCzk9-6nVv5Q(E=3AE; zQgEA7zIqTvy}shzBOKN}PI51&qVA~(wn-;E9^v-af;^0cpBQ+YEupp`xK3l6n}axF zyKMyB>hW65d)rRBkq&8CiHz|oCV%+vHQ zv&*86X6$9<&yU5D%bta?H~$Du$Q;l;ZqyJMooV;>94cUNb(A0;+4?JMvD&qsc~TI8}Ra;D&VZ~E>79%%AVr=d}Q zkUN64F+a!tJbvkfo`uGTEqEl7MzezqPw|d8g#tteCRp%z{zH92@xCp{nCXWy7$-i zGceQ$Z#q*zb0~re)WSA@OjCiYkls0;lEjoj?2mLWNG>q$7aEdyZO%XN;Akb6(NHV7 zwOF4wXdKE$2-AO|-ZNGfrF_%M5Xzf;XP-(L<})3;0c6bIQm02w2{9_Ve|(|zh|=_r zALD0#h`oXDj+5^DD_SY(fPtT<>=p#ys1gRONa0 z7#vZnD1(ZcS^{K$>JloDwqBhomt20eQ3Eu1B%X6f|)czvzeC)CFLnmsi zFVh}i%6;1oCFs}bT=Dpdb&5GvQaFlwzj1DVCm{})6-;B!4XE3fcK?v~>2lx1Al^g8 zRhJ^?!yhJ%EEsq&>`u}njjjyHea%3j@KXm{ZI>La!)(uRMXC-+eAuMZXMI23&TLyc zY=P@L6*j^xAzT!t5L(QG3lm&pyLemF`jfx)xN6wA$8ME{p~XO*eZq^&wethLr1jnX zQwN8G<9fg3wn!eC{ZOdJWp&4|>Zz%6=@^EsE&ztba$sT`^%&1h_$+GQl3!K1F}#_> ziV>f`@c?d!SEx*%-+Df;UX7YN>D<63?9V8NuGfE5FrLU_d)(Zr2Wx9l#$%~xcj8g< zwWe{!?yKKsY_qqd`zqo?q)T+p^>vHQOqhiTH~ihn1ISPPotBkE1#}slRV8Ue|4IHk zH?+Y~-`Je_Y?OUri&wV#$vdOhziPV@;Qb-YNUX_r5cZ9fIUN9mHRViz}R}%QsV_DJa5TFs#qj2;y44!0~Y;qEp4Tt*4>DnJa0;B)Nlecd~%>(dsHnKbAz zk))Nx=gV?Z$lQjhk9fV_cDyK`vh0}X_Un+ltZ!=NMPQmMq75-&XM1_aj&w*Ao+hXo9p)2MaGQT+J`24>wGHV^w`ZY9ZC*6=zAl$ z?coG>i1>Bai|syRlr#4D^zG+-@njK~dn{F}Es%xvdS1SS^C-3HbT0+5mC3%c3je-% 
z@92u?cvzy-THb_Z2;|<3zeB?6@n>Yqv1*z{4|x?el$D2VQnBH&vnSWSwjDegG6L+^ zlw2060rMo1k@{<5goSCHm51`1-XK-dSGCBm(ezZH+Jg$DG}wE`T&?I6+c@Dk5on-Q zqY>-Zc(O2{)0qus0?(Z}ZiEi@gJsv95!wIF`l3hy^H2{>u_eRm>6bJ8IphctL<}+9 zf%jzh1F~{SA6~u@gK1zpjpsj0C&1@cKy#78+Y9Ok^DnE>jlR`~W{;TMQO(P65C58a z=Hi19UVtp(3tt(2d9F&wE018muEplh$^WG!30)+7{)9SMAV*{_T1hC8bGtVtssb-5 zw_QvSilRMv-7PTG`4K!JZDL(rHiJwB&dMxwt?eFw&qi0gXnac)OFa_os361_T5d}G zXdxT%RsP~_vR)~QXJ7vvloV%F)Wli5-e{rm(sT#OSC@o-<`L_M@EDVqhL>W)@oCx& z^mP&2(MlhQT3Ir5wATri^;fnsZQRCM7v=DS>Ss|+w^RntJWZ8_$@$!V-noXv)GmQQ zG$Vr56y#T~$@}e~cWE5>>ZPwlH8ogiI5K%8XWq#THIt9WvWAwXbPPX&EwIg`tNDW^3C9*#2VKiNZe7@e4P* zwxOc|b_fF^%N%Bs^zKK_@nGy1gc8W-zaFs?pW`Q=NT)|3s$&i4!L)7svtpY$nbjD4S#M>pvd%VH4sb~+QU z%c2pKYAeyi`qx^ZnP*S%@6N=>ouI>+COLjl9~5hlUmm7#6s>@Yre{<02odYhj{Rml z{KbiXEm|&ngq59U-ULxMOwdFJUhQanaxS#K(rEk3I<3?6(kf~L%0zG>OQmVpiOyCf zJp3_c7;=xsv(cI1T7%+9~wc|E*n%9bUgt&+M$3m$Edw^wwkvmn|00qxBIE(ljqJPgkPGJlL2V^f`C? zu%nvlc)Ugz=iJv-rsvAghNuZ>=KE=^$RHU*q%S6h5{khJtf6{}KpX@<=hra4k=F+i zp|kf11Son9O{1Dfl{D&FK~QpUC7+B%X;JHg91SAJ!s$PcZdDx+l_X zbi68*`qW^*W)l@su0vurNWWtX#`V*hGK`I`m)O6R!#sY@vp%0doWB!?c-ldhd-{n~ z9(MD(WyFZZRoNisw1DQz^0Ind{^DaSffaUJq?%wi```j`v3su;X?%H7C(rm&q;eCh z#ie(VX7`m8?hyGzu9m&3+oH%9erU#a^JEsAkA}ucG^l(y>0-)OhY{;)sx|WKJYBpU zaWFUtEh;wr5oJigNg8xMZ&VM#t#+Fv9^%1c zF+~*XS{sC?S=;a-P#LUL)cGVMq z7-`^GkdGAM`64O5IdgOedI(o93vsf~?S&r)nH@D(0)6yaIeJQqW7k6wxDH{WVynrh8NV?7{D%;|?+b zwx7j1d#1yhWZJ9Ea?#AThX^)xTXaqAr2aba~`z=WY1a!wWGKFu^~b5*i7Hzs$+*y08ELST7sXmaDKmBfM#?vaKX} zR%^O-cew-p%df7x&IglM?J+yUNvXg5X3eD3x@swyFF$&M{^d6{C78a6ScvrNyb1Qc zfJ{rfr^K#J^Y7LSBX&EDJYu2W**o5Vncgh0D*IO?_JesPZEd&`JMiz;^zS3x_2d7< zR|S+{_Mtrp&?JMe6teHlq5Zq`At0L|nLjzE$%F>qD0#LQa9c1nNdL1``!CIaZPa(C z6^$~JFN#}w=;(+nqO>1uR#&PxCW_TM_^PGJ2w3`ac<_Jnzla2^LgZEv-%d*0Q7HI! 
z660QAJ3vAl0LAa|uWfyq5iCkHBZO{tBR3QclKK<9RVeD)^@u#MY#@d6vHbqON4x&3 zulb2?@jQsRQ1XQhxRUmhJpi$QxF#i~?0M}oUttAJj#_>>YzmfW-H`HOdAR)yIg#(D zOq#`!PG!+{*#36{TEEl1|13Tq@Ci$b^C$~%IQR7nmM_S)?MIYiT}9|r^47S|J~_9A z!;clNg~cu#djyQO6L$KL^Q$>w`B&-{e{>FOMTl)H@3O;91OFmcdqwPb>fi3cLb{Xn z*Z30*SqH{~Tlqyg9;Ock=4~L%`*6jH5n7Ag*`L zd~-)&14EjwRll8lT)^|DC%APgP`}E`Pq}GBFQK|~n<+CH&zit$g6n%HVbsC@1?C^T ACjbBd literal 0 HcmV?d00001 diff --git a/docs/img/pai_token_profile.jpg b/docs/img/pai_token_profile.jpg new file mode 100644 index 0000000000000000000000000000000000000000..52d68bb7b571dc71ca51b62dfe722daa90048bcc GIT binary patch literal 55722 zcmeFZ1ymegvoG2cJa~e;26uN09yGx%!3TE+2oebH9^Bmm3~otq5AN>nZjb->-E-bM zw(dFKyZ7An*2~PQo;BS)Jzcebd+)0H)$XV1r)2;`PD)k^0D(ZjkLNe=GzUlki16?T z@NkF-2na|>h%ZsFP*ISPQSe{AM#my1ASEFtAR;28Vx}RZV5B4>qUE7uWM$*z5sKEeUfM8%=z{0@6!NNYD?fHBhfW?G+Mb0J$ z|61h}0);&m`pV+D+$5b4K4!%e)ad7eQ38>%D(9+R!a&hzU^6^W&m6Vc} zk(GP@K}}slQ%l>(*u>P#9Bkp}K_mo6dV#08yBCD_$?_pJ0~|Uzo4+F zxT+djQ(ITx(DbJXZPgv?EK>L>iXvPFTFqj z%-_}eXU+bVUYO5%y?}*=M?k(hcx?V#r|)-W&so!&~xL#U;-k*)h%td&&xmZ z#|D2e@CO5bFz|N_81;T%!@mb#z%G6Eqy;T{--(ow{I!vg1QfQOfa=?kzir1km0|xJ z{C{PHJ?2dq+dWk^NZaJgN>Mm0m474H&UodUV>X80>vcTp!oPB`2OwTAN4c*yw{fm<*Xy580M+5qW0T|)kQI^yuEQHR#x^l4B@^1w$QNLno~GwM>@yY z+N85B=F*Pc0y_Kbu!1*3f7rw5;6xSQGYU$Xb)b!y#Ic@A(iH+zPr!ODMCY;P%@gq3 zxBxsGQMbZ^Hv{p|95?tdRzq!DxzJaiz(e}0yUljMlb!{$C;GG5@jwiJF4&BNgY5^v zm_#*dfMrmz-hFA@94VdMe8_O1w2IbDsL<3jcF$%+j%p7Zj+I!>@BgVs>1--LBaCZ7 zZuSXycf9D^Pf}F4{*WzJuhY?DfKaqI^?PK*EpXd~QzO;lk>CV3;JoOfg{N~iZDS(W z6r9;&ecV^l545#+d3QJ$eKuR=JMzMO0&I512F)0EkOKn5OTRV-fCA79a~I;{*a0Tz`>UTfZ35760#`6(b(_(_FxO&eP5Fyo`m$_*@72&jP}GA1yHGF5O!n=x;IL?R z13KDnD`$$DjxDCSOYP*)*Rsy)b$H^S`33Knd)q*E2<_jc%34smUm(q6<6KL zjG#RAP%hUslcB$9XVTa8D#zaSld*DiZJZ+E-Q_FG+JK9{fkIs==(BXewUMIEf@Cgc zb32buXwg!4;CsuL(Lg%!dz4*RdT^%jN%S@6=)fD*B-)XyWG@ib!qjX*kVM6a&rC)y=R1iPa*T8#N`FI;#S=WZ*iL zM1N|iOv2BxV!hGzySY5l56v@gG!3-8v}W%k)C}&$EJr3)b&tKspEuXGo~*Mh9kgyz zt`UzKYzg=+2K$kSq@Xa*B>UoCMZ^7;$n2|geV70e! 
zn)~Lx>?m~eq|3k1a-_+;`t4-33|?XQX{Rh@0n-i4j=q1{wX5t2cC(#Dt)*Rb?fFfT zmQpnxte3NeC-=~v(jDAHqwZ|{OQVb{pD$wqRkBxtzYq5ppMY-(iSnj)?DqoKj&K>X z8xYkw{4*V8a+%x1PL>Wr24aEb^MuBa6rHedh$=$=iezjTRK zrQL8JZ%irA%LX+h$7;%$po8Mm@}_r=7^Y&UQPT8cct5hWOI|FlCu`h)*>Brh6nTKv zz+D!(A^L3l1Q_%`yB->DShBEmSVk5U{wk@r4IPj3&0mip6g#i-kDO&8k* zms8OS^6Bc__}m}OWj%`AiPCdcmJqHB_%|Z56EwapJY*T!Q8){i3VAw!u1ZKzxyq!q zjvso*&Fas<#H{*qw3=PzF23v!@n3b$(}PMT{(mG$x`u6F?fX!K=~?!vY{>pDNax| zOZxphrY}R^zSX_f@$BK|V!OPjtwj7lbm`lhATOU+Xl&_-ya(bgHV4X=J^Mz|Tq*mdW2Dwth|V>xC4taP_Bk zT!Q3^9-U*CQMqr2yGWSQcl$OG%^DU^VmkV&P>=dz_;&f-eJ;D7Je)O1!aZ~6MVUh<)^TlJ))QK(RoOpeVoJqWTB-cd66Hd~5?eL!g@^AP)O# z_nVSJBj-nRVVvZ>RqmDobPS8*mMS_P6Z|I&( zyY51gIv=$mXtq?QTV-siOq%C%%HNCH;x^Zdv}L@E3va^6Lyal2X$n|-_K_oBL0pdH zPf8!jmC&Rsz*Es*get=XI+j84s^|ru<|Am87Q&x^d{j%8JuxR#cTdu8m)n<+Fw0xL zbCc|(e63vnA2e`ZL>3+--Tmzr33MzMEcnXqVUs1BWpoZ)aeA?`Zh?}pE5Rv8g?;9N zG)Tt&Ua=l`(a8%>F0qHfw74$>iZZ4cmlq!#&ibZ_byJHY z#iriW-eZ4KzCoDWRbAbqgkMJ`(NSwg#@y5iq^n`cUZ+ zIBoR^ru@t-!OY>;O}W?@32UAVU8I~4)GC{Oh*-UNYp3)2jKY3u%wJ_#kOa>1vbpRQ zzantx2`KVgQCT&3W#L6pWq_eKJJabRq0rn*cJmE3l4hN_m&6weWjL^GC&hUG>o&cJ zxxmZ~DPb1e%TD)GFgEs{%1Agcr2SaEqS*YlE%=a$Gs&4jqz;qQrTeO9E&~PkI?zx! 
z{x`&1Ve+%|+VHhJmnXLZ_w4Lz)V$_-83-kn^Wz)AFXQeXBntA|>{a*n3Kk_+&~F%8 z8J_^xC%{xYW#zEam1!xIdYq-4CSY_s?zP+yrLqCJ0|bw_g_QU^Oea3B`A) zSF;@ELy}Z(3>rH=K4Rmx9}pZ0RN!6cPmYpyfm}K5J-UaYS?GE-LmE;b!U)gKX?vNX{b~Hw|^S}UgSMdHE)=ue~Oea4f z4S!Q&k`&BwC6{N zi<&c?c1VHyPe2P3^a<$a!dD|HD3^5h#&S}S&_0f1+nGGD=fp*3c-sakR7FE@9F}vZ zO4)t_dhC}bu$vV=d92t566E|W-%aF!TXU*UUkQueh{Gu5gSiKDiQp`5i=c3N^k@}~ zAAJg;&5b(}m~qhDDq3rEplc;>n-F||JoVn-TKWQ6K64~nNA@*7RXZKE27b5xLXoop z(vsy83a2G6*3Y6D36#oi+G$t%<)UGUVXxIhX|KIQEEfZIzI)=;X{V|2#4e4a;#|}B z2>0I^;a%W{gsQ_lKH#=8w@kL8&hKFkQkQ2DGOEhjY#@HzMS^u;N4($dNM}1S8A`pi zfN(!>Nr!cq35pWp&Eti6tciq66hg$$B(DV=b0n}5T~FEjDRiLm_hXzo$5nKM8G}U( z8^!f#pRUzXDf^=kn-oiX(smgp5+wC?u$R`?EHuLYu6gGn4>f}T91U&nFJ;z8*7>z* z#!=rk!*ZULM@X(a>2-Pne5GeXA8VPekKTF8|4PhbIXn;^Mu{b@?yVaH5w6>+33WPn zqdtXC{PrZHs|8&f8uJ8*k29`zC)rz5$)+cnka_bj zH^V#>kOVDaOnescw8o1tjyd=p)MIsw#;PwrO-mP@|>&d#WoC?}|o&15J&P@iD3XXUIi@$3( z!+F>5R;2fS6sxzU=-l4SC)PGSyNv!39PR1!q3!ID(ynx`JV)=1nK>5f`RcH>-^N~k zEiShXyZR`{){k`+2J=h1)F>5L)z#AYBFqtWkTdLFhwijPyK^@F_{$Jl?c&Vd?y^i9DXpx5Q z_2yq1L~A=*);t04)1eKGlblASi}#Wd-D=VAzRyTPqB}{iRAL1)D%O(Q99It{G_N_F zOOS(^F7Wg|#hBxD$s_2C5wjx>3GF7t5}Hq{?6`NgzxQwhrt8!%!4wMHT9rPYr(QcKc=`&c`bz6p@6AHdLHZB*CJ!g^6Vvi>*1vEcs!M}cRSB{%aq zKn9(7mtq^1bUuDp3jt`e|Hyx*4e0;!t3UibvHODmf0pkK+2l^1G>`Kc3d<4V!)C%t zZu~x0-OzV0ay)V5FUWuEdH!9NEiDfop4v{7_{UbrZeey&Q%rn>#7}+f*5i%K2BG!Z z!@4VBD{&k{(SFhtGRHHtr1I7fI+TAz=g$UvaGJ$`L(d{|ImY3UU9)9~=C^!2g#q zpfay32DZ7v<&Z{U3m>Km!~WnN1Hl4OV~sl{aK&(INP*(x)-l{MEV!C#E$g5lc>`-Q?mt?P39&iVxZ;krA z`MY(+MPZ+#w-|Gfd0|<3*%8y|>LjGV|8YN>*Tgo9h#=wbUI#rx=r_iL#nnyoWnZ?$ z7>$?3Lb8G{pL5e++uB9W*}$aNgrrXZwkRQ|8t;W3zi}<323ZSdJK9Fwgu;-2e`~Y5 zMmO_C2jcb4v=Y)Oe^(r;&JJsjmD7Aa z2qDaMu|i1&93W-<0Bvz1e|%orLSE_Ve)7o-nO^SEM;^>5R$k)W=T`aK%p)^UnD@WNT=K_96d@}VEY9h5ltEjXODO!N@DVB~e3<^D zB<_EDzp}3WtQM5P(HRAmBWU$i4^~ficDZ}Tr}w}APRfZhAB4|9aWM&_(mw@47IAqO z>dHA5^Z$#s`wwiL*#ZrD>{=aiUO+j4+6al^aS1)UxYe)+MST$XQ>AOE6oqEn^D+vv zs&5%@?+UgxurpRtQWB~rn$^%UxGsiDTtm%`$Ys9>%baTZ

diff --git a/docs/zh_CN/CommunitySharings/NNI_AutoFeatureEng.md b/docs/zh_CN/CommunitySharings/NNI_AutoFeatureEng.md
new file mode 100644
index 0000000000..ec932fc3ec
--- /dev/null
+++ b/docs/zh_CN/CommunitySharings/NNI_AutoFeatureEng.md
@@ -0,0 +1,88 @@
+# A Review from Zhihu - by Garvin Li
+
+This article was posted by an NNI user on the Zhihu forum. In it, Garvin shares his experience of using NNI for automatic feature engineering. We believe the article is useful for anyone interested in doing feature engineering with NNI. With the author's permission, the original article is excerpted below.
+
+**Original post**: [How do you view Microsoft's newly released AutoML platform NNI? - by Garvin Li](https://www.zhihu.com/question/297982959/answer/964961829?utm_source=wechat_session&utm_medium=social&utm_oi=28812108627968&from=singlemessage&isappinstalled=0)
+
+## 01 Overview of AutoML
+
+In the author's view, AutoML is not just about hyperparameter tuning; it should also cover automatic feature engineering. AutoML is a systematic framework that includes automatic feature engineering (AutoFeatureEng), automatic hyperparameter tuning (AutoTuning), automatic neural architecture search (NAS), and more.
+
+## 02 Overview of NNI
+
+NNI (Neural Network Intelligence) is an open-source AutoML toolkit from Microsoft that helps users design and tune machine learning models, neural network architectures, or the parameters of complex systems in an automatic and efficient way.
+
+Link: [https://github.com/Microsoft/nni](https://github.com/Microsoft/nni)
+
+So far I have only studied the automatic feature engineering module. Overall, Microsoft's tools share one notable trait: the techniques are not necessarily novel, but the designs are excellent. NNI's AutoFeatureENG covers essentially everything a user could wish for in such a tool; being a PD at Microsoft must be quite pleasant, since the designs of these underlying frameworks are remarkably sound.
+
+## 03 A Closer Look at NNI - AutoFeatureENG
+> This article is based on this project: [https://github.com/SpongebBob/tabular_automl_NNI](https://github.com/SpongebBob/tabular_automl_NNI).
+
+New users can run AutoFeatureENG with NNI easily and efficiently. Usage is straightforward: install the requirements listed in the project, then `pip install nni`.
+
+![](https://pic3.zhimg.com/v2-8886eea730cad25f5ac06ef1897cd7e4_r.jpg) NNI splits AutoFeatureENG into two modules: exploration and selection. Exploration handles feature derivation and crossing; selection is about how to filter features.
+
+## 04 Feature Exploration
+
+For feature derivation, NNI provides many operations that can automatically generate new features; the [list](https://github.com/SpongebBob/tabular_automl_NNI/blob/master/AutoFEOp.md) is as follows:
+
+**count**: traditional statistics, counting how often values occur
+
+**target**: mapping features between a feature and the target column
+
+**embedding**: treat the feature as a sentence and build vectors with *word2vector*
+
+**crosscount**: division between features, somewhat like CTR
+
+**aggregate**: min/max/var/mean of a feature
+
+**nunique**: the number of unique values of a feature.
+
+**histsta**: statistics over feature buckets, such as histogram statistics.
+
+Exactly how features are crossed, which column is crossed with which, and which derivation method each column uses are all controlled through the **search_space.json** file.
+
+![](https://pic1.zhimg.com/v2-3c3eeec6eea9821e067412725e5d2317_r.jpg)
+
+The picture shows the process of defining the search space. NNI provides count encoding for 1st-order operations and aggregated statistics (min max var mean median nunique) for 2nd-order operations.
+
+For example, to search for frequency-encoding (valuecount) features on the columns named {"C1", ..., "C26"}:
+
+![](https://github.com/JSong-Jia/Pic/blob/master/images/pic%203.jpg)
+
+A cross-frequency encoding (value count of cross dimensions) method can be defined on the columns {"C1", ..., "C26"} x {"C1", ..., "C26"}:
+
+![](https://github.com/JSong-Jia/Pic/blob/master/images/pic%204.jpg)
+
+The purpose of exploration is to generate new features. In the trial code, the tuned parameters can be fetched with **get_next_parameter**:
+> RECEIVED_PARAMS = nni.get_next_parameter()
+
+## 05 Feature Selection
+
+To avoid feature explosion and overfitting, a selection mechanism is needed to pick out features. The selection stage of NNI-AutoFeatureENG mainly uses LightGBM (Light Gradient Boosting Machine), a gradient boosting framework developed by Microsoft.
+
+![](https://pic2.zhimg.com/v2-7bf9c6ae1303692101a911def478a172_r.jpg)
+
+Anyone familiar with xgboost or the GBDT algorithm knows that tree-based algorithms like these make it easy to compute each feature's contribution to the result, so LightGBM naturally lends itself to feature selection.
+
+The drawback is that if the downstream model is a linear algorithm such as *LR* (logistic regression), the selected features may not generalize to it.
+
+![](https://pic4.zhimg.com/v2-d2f919497b0ed937acad0577f7a8df83_r.jpg)
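The derived features above are named by operation and columns (e.g. `count_C1`), and the trial receives the sampled choices as a plain dict from `nni.get_next_parameter()`. Below is a minimal, hypothetical sketch of how such names could be decoded; the key layout of `RECEIVED_PARAMS` and the `parse_feature` helper are illustrative assumptions, not the tabular_automl_NNI project's actual schema, and the NNI call is replaced by a hard-coded dict so the snippet runs standalone:

```python
# Hypothetical stand-in for nni.get_next_parameter(); the key names below
# are assumptions for illustration only.
RECEIVED_PARAMS = {
    "sample_feature": ["count_C1", "aggregate_min_C1_I9"],
}

def parse_feature(name):
    # Split an encoded feature name into (operation, remaining parts), e.g.
    # "count_C1" -> ("count", ["C1"])
    # "aggregate_min_C1_I9" -> ("aggregate", ["min", "C1", "I9"])
    op, _, rest = name.partition("_")
    return op, rest.split("_")

for feature in RECEIVED_PARAMS["sample_feature"]:
    op, cols = parse_feature(feature)
    print(op, cols)
```

A real trial would then build the corresponding derived columns and report the validation score back with `nni.report_final_result()`.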

+
+## 06 Summary
+
+NNI's AutoFeature module sets a textbook-level standard for the whole industry: it shows how this should be done and which modules are involved, and it is very convenient to use. But such a simple mode alone will not necessarily achieve good results.
+
+## Suggestions for NNI
+
+For exploration, feature-combination approaches from DNNs (e.g. xDeepFM) could be borrowed to extract higher-order features.
+
+For selection, more intelligent schemes are possible, for example automatically choosing the selection mechanism based on the downstream algorithm.
+
+In short, NNI's design has given me some inspiration; it is a good open-source project that I recommend to everyone. AI researchers are encouraged to use it to speed up their research.
+
+Mac users may run into gcc problems, because the scripts shipped with the project are built with gcc7. This can be worked around as follows:
+
+# brew install libomp
+
diff --git a/docs/zh_CN/CommunitySharings/community_sharings.rst b/docs/zh_CN/CommunitySharings/community_sharings.rst
index 828ff48b4d..e549dba143 100644
--- a/docs/zh_CN/CommunitySharings/community_sharings.rst
+++ b/docs/zh_CN/CommunitySharings/community_sharings.rst
@@ -13,3 +13,4 @@
    Comparison of hyperparameter tuning algorithms
    Parallel optimization of TPE
    Automatically tuning systems with NNI
+   A Review from Zhihu: by Garvin Li
diff --git a/docs/zh_CN/Compressor/Pruner.md b/docs/zh_CN/Compressor/Pruner.md
index 0e7963c9d8..d564109149 100644
--- a/docs/zh_CN/Compressor/Pruner.md
+++ b/docs/zh_CN/Compressor/Pruner.md
@@ -335,5 +335,3 @@ pruner.compress()

- **sparsity:** the percentage of convolution filters to prune.
- **op_types:** only Conv2d is supported in ActivationMeanRankFilterPruner.
-
-*** \ No newline at end of file
diff --git a/docs/zh_CN/Compressor/Quantizer.md b/docs/zh_CN/Compressor/Quantizer.md
index d2a571f874..3d63a3b3b9 100644
--- a/docs/zh_CN/Compressor/Quantizer.md
+++ b/docs/zh_CN/Compressor/Quantizer.md
@@ -5,10 +5,9 @@ Quantizers in NNI Compressor

Naive Quantizer sets quantizer weights to 8 bits by default; it can be used to test quantization algorithms.

### Usage
-tensorflow
```python
-nni.compression.tensorflow.NaiveQuantizer(model_graph).compress()
-``` pytorch
-```python nni.compression.torch.NaiveQuantizer(model).compress()
+```python
+model = nni.compression.torch.NaiveQuantizer(model).compress()
```

***

@@ -45,7 +44,7 @@ quantizer.compress()

See the example for further details.

#### User configuration for QAT Quantizer
-Common configurations needed by compression algorithms can be found in [common configuration](./Overview.md#User-configuration-for-a-compression-algorithm).
+Common configurations needed by compression algorithms can be found in [common configuration](./Overview.md#压缩算法中的用户配置).

Configuration needed by this algorithm:

@@ -78,7 +77,7 @@ quantizer.compress()

See the example for further details.

#### User configuration for DoReFa Quantizer
-Common configurations needed by compression algorithms can be found in [common configuration](./Overview.md#User-configuration-for-a-compression-algorithm).
+Common configurations needed by compression algorithms can be found in [common configuration](./Overview.md#压缩算法中的用户配置).

Configuration needed by this algorithm:

@@ -114,7 +113,7 @@ model = quantizer.compress()

See [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/master/examples/model_compress/BNN_quantizer_cifar10.py) for more information.

#### User configuration for BNN Quantizer
-Common configurations needed by compression algorithms can be found in [common configuration](./Overview.md#User-configuration-for-a-compression-algorithm).
+Common configurations needed by compression algorithms can be found in [common configuration](./Overview.md#压缩算法中的用户配置).

Configuration needed by this algorithm:

diff --git a/docs/zh_CN/NAS/CDARTS.md b/docs/zh_CN/NAS/CDARTS.md
new file mode 100644
index 0000000000..b4347127e7
--- /dev/null
+++ b/docs/zh_CN/NAS/CDARTS.md
@@ -0,0 +1,61 @@
+# CDARTS
+
+## Introduction
+
+CDARTS builds a cyclic feedback mechanism between the search network and the evaluation network. First, the search network generates an initial architecture for evaluation so that the weights of the evaluation network can be optimized. Then, the architecture in the search network is further optimized by the labels from classification and a regularization of feature distillation from the evaluation network. Repeating this cycle optimizes the search and evaluation networks jointly, so that the architecture is trained to become the final evaluation network.
+
+In the implementation of `CdartsTrainer`, two models and two mutators are instantiated first. The first model is called the "search network" and is mutated with a `RegularizedDartsMutator`, which differs slightly from `DartsMutator`. The second model is the "evaluation network", which uses the mutator of the search network to create a discrete mutator that samples one path at a time. The trainer trains the models and mutators alternately. Readers interested in the implementation of the trainer and mutators can refer to [the reference](#reference).

+## Reproduced results
+
+This is CDARTS on the NNI platform, which currently supports searching and retraining on CIFAR10. Searching and retraining on ImageNet are also supported, with corresponding interfaces. The results reproduced on NNI are slightly lower than those in the paper, but much higher than those of the original DARTS. Here are the results of three independent experiments on CIFAR10.
+
+| Run | Paper | NNI |
+| -- |:-----:|:-----:|
+| 1 | 97.52 | 97.44 |
+| 2 | 97.53 | 97.48 |
+| 3 | 97.58 | 97.56 |
+
+
+## Examples
+
+[Example code](https://github.com/microsoft/nni/tree/master/examples/nas/cdarts)
+
+```bash
+# If the NNI repo is not cloned yet. If it already is, ignore this line and enter the code directory directly.
+git clone https://github.com/Microsoft/nni.git
+
+# Install apex for distributed training
+git clone https://github.com/NVIDIA/apex
+cd apex
+python setup.py install --cpp_ext --cuda_ext
+
+# Search for the best architecture
+cd examples/nas/cdarts
+bash run_search_cifar.sh
+
+# Retrain the best architecture
+bash run_retrain_cifar.sh
+```
+
+## Reference
+
+### PyTorch
+
+```eval_rst
+.. autoclass:: nni.nas.pytorch.cdarts.CdartsTrainer
+    :members:
+
+    .. automethod:: __init__
+
+.. 
autoclass:: nni.nas.pytorch.cdarts.RegularizedDartsMutator
+    :members:
+
+.. autoclass:: nni.nas.pytorch.cdarts.DartsDiscreteMutator
+    :members:
+
+    .. automethod:: __init__
+
+.. autoclass:: nni.nas.pytorch.cdarts.RegularizedMutatorParallel
+    :members:
+```
diff --git a/docs/zh_CN/NAS/DARTS.md b/docs/zh_CN/NAS/DARTS.md
index 4f350efa9f..c092070dc4 100644
--- a/docs/zh_CN/NAS/DARTS.md
+++ b/docs/zh_CN/NAS/DARTS.md
@@ -1,4 +1,4 @@
-# DARTS in NNI
+# DARTS

## Introduction

@@ -6,13 +6,45 @@

For the implementation, the authors optimize the network weights and the architecture weights alternately in mini-batches. They further explore the possibility of using second-order optimization (unroll) instead of first-order to improve performance.

-NNI's implementation is based on the [official implementation](https://github.com/quark0/darts) and a [third-party implementation](https://github.com/khanrc/pt.darts). Currently, both first-order and second-order optimization with training from scratch on CIFAR10 are implemented.
+NNI's implementation is based on the [official implementation](https://github.com/quark0/darts) and a [third-party implementation](https://github.com/khanrc/pt.darts). DARTS on NNI is designed to be used with any search space. As in the original paper, a CNN search space for CIFAR10 is implemented as a working example of DARTS.

## Reproduced results

-To reproduce the results in the paper, we ran experiments with first-order and second-order optimization. Due to time constraints, we retrained the *best architecture* from the second stage only *once*. Our results are currently on par with the results in the paper. More results will be added later.
+The example above is meant to reproduce the results in the paper; we ran experiments with first-order and second-order optimization. Due to time constraints, we retrained the *best architecture* from the second stage only *once*. Our results are currently on par with the results in the paper. More results will be added later.

-|                       | In paper      | Reproduced |
-| ------------ | ------------- | ---- |
-| First order (CIFAR10) | 3.00 +/- 0.14 | 2.78 |
-| Second order (CIFAR10) | 2.76 +/- 0.09 | 2.89 |
+|                       | In paper      | Reproduced |
+| ----------- | ------------- | ---- |
+| First order (CIFAR10) | 3.00 +/- 0.14 | 2.78 |
+| Second order (CIFAR10) | 2.76 +/- 0.09 | 2.89 |
+
+## Examples
+
+### CNN search space
+
+[Example code](https://github.com/microsoft/nni/tree/master/examples/nas/darts)
+
+```bash
+# If the NNI repo is not cloned yet. If it already is, ignore this line and enter the code directory directly.
+git clone https://github.com/Microsoft/nni.git
+
+# Search for the best architecture
+cd examples/nas/darts
+python3 search.py
+
+# Retrain the best architecture
+python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json
+```
+
+## Reference
+
+### PyTorch
+
+```eval_rst
+.. autoclass:: nni.nas.pytorch.darts.DartsTrainer
+    :members:
+
+    .. automethod:: __init__
+
+.. 
autoclass:: nni.nas.pytorch.darts.DartsMutator
+    :members:
+```
diff --git a/docs/zh_CN/NAS/ENAS.md b/docs/zh_CN/NAS/ENAS.md
index c25b27bc9b..dcfa3ec060 100644
--- a/docs/zh_CN/NAS/ENAS.md
+++ b/docs/zh_CN/NAS/ENAS.md
@@ -1,7 +1,46 @@
-# ENAS in NNI
+# ENAS

## Introduction

The paper [Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/abs/1802.03268) speeds up NAS by sharing parameters among child models. In ENAS, a controller discovers neural networks by learning to search for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select the subgraph that maximizes the expected reward on the validation set, while the model corresponding to the selected subgraph is trained to minimize a canonical cross-entropy loss.

-NNI's implementation is based on the [official Tensorflow implementation](https://github.com/melodyguan/enas) and includes the Macro/Micro search spaces on CIFAR10. Training from scratch in NNI is not finished yet, and there are no reproduced results at this time.
+NNI's implementation is based on the official [Tensorflow](https://github.com/melodyguan/enas) implementation and includes a general-purpose reinforcement-learning controller and a trainer that trains the target network and the controller alternately. Following the paper, Macro and Micro search spaces are also implemented on CIFAR10 to demonstrate how to use the trainer. Training from scratch in NNI is not finished yet, and there are no reproduced results at this time.
+
+## Examples
+
+### CIFAR10 Macro/Micro search spaces
+
+[Example code](https://github.com/microsoft/nni/tree/master/examples/nas/enas)
+
+```bash
+# If the NNI repo is not cloned yet. If it already is, ignore this line and enter the code directory directly.
+git clone https://github.com/Microsoft/nni.git
+
+# Search for the best network architecture
+cd examples/nas/enas
+
+# Search in the Macro search space
+python3 search.py --search-for macro
+
+# Search in the Micro search space
+python3 search.py --search-for micro
+
+# See more options
+python3 search.py -h
+```
+
+## Reference
+
+### PyTorch
+
+```eval_rst
+.. autoclass:: nni.nas.pytorch.enas.EnasTrainer
+    :members:
+
+    .. automethod:: __init__
+
+.. autoclass:: nni.nas.pytorch.enas.EnasMutator
+    :members:
+
+    .. 
automethod:: __init__
+```
diff --git a/docs/zh_CN/NAS/NasInterface.md b/docs/zh_CN/NAS/NasInterface.md
index c7893036d9..dd3f98499f 100644
--- a/docs/zh_CN/NAS/NasInterface.md
+++ b/docs/zh_CN/NAS/NasInterface.md
@@ -98,7 +98,7 @@ trainer.export(file='./chosen_arch')

Different trainers may take different input arguments, depending on their algorithms. Detailed arguments can be found in the specific [trainer code](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch). After training, the best model found can be exported with `trainer.export()`. There is no need to start an NNI experiment through `nnictl`.

-[Here](Overview.md#supported-one-shot-nas-algorithms) are all the supported trainers. [Here](https://github.com/microsoft/nni/tree/master/examples/nas/simple/train.py) is a simple example of using the NNI NAS API.
+[Here](Overview.md#支持的-one-shot-nas-算法) are all the supported trainers. [Here](https://github.com/microsoft/nni/tree/master/examples/nas/simple/train.py) is a simple example of using the NNI NAS API.

### Classic distributed search

diff --git a/docs/zh_CN/NAS/Overview.md b/docs/zh_CN/NAS/Overview.md
index 1474a4d788..fc6c734c81 100644
--- a/docs/zh_CN/NAS/Overview.md
+++ b/docs/zh_CN/NAS/Overview.md
@@ -6,93 +6,33 @@

Motivated by this, NNI aims to provide a unified architecture to accelerate innovation in NAS and apply state-of-the-art algorithms to real-world problems faster.

-Through the [unified interface](./NasInterface.md), there are two ways to do architecture search. [The first](#supported-one-shot-nas-algorithms), called one-shot NAS, builds a supernet from the search space and uses one-shot training to generate well-performing child models. [The second](./NasInterface.md#classic-distributed-search) is the classic search approach, where each child model from the search space runs as an independent trial and reports its performance to the tuner, which generates new child models.
+Through the [unified interface](./NasInterface.md), there are two ways to do architecture search. [One](#supported-one-shot-nas-algorithms), called one-shot NAS, builds a supernet from the search space and uses one-shot training to generate well-performing child models. [The second](./NasInterface.md#经典分布式搜索) is the classic search approach, where each child model from the search space runs as an independent trial and reports its performance to the tuner, which generates new child models.

* [Supported One-shot NAS algorithms](#supported-one-shot-nas-algorithms)
-* [Classic distributed NAS with NNI experiments](./NasInterface.md#classic-distributed-search)
+* [Classic distributed NAS with NNI experiments](./NasInterface.md#经典分布式搜索)
* [NNI NAS programming interface](./NasInterface.md)

## Supported One-shot NAS algorithms

NNI now supports the NAS algorithms below, and more are being added. Users can reproduce the algorithms or apply them to their own datasets. Users are also encouraged to implement other algorithms with the [NNI API](#use-nni-api) to benefit more people.

-| Name | Brief introduction of algorithm |
-| ------------------- | 
--------------------------------------------------------------------------------------------------------------------------------------------- |
-| [ENAS](#enas) | Efficient Neural Architecture Search via Parameter Sharing [Reference paper](https://arxiv.org/abs/1802.03268) |
-| [DARTS](#darts) | DARTS: Differentiable Architecture Search [Reference paper](https://arxiv.org/abs/1806.09055) |
-| [P-DARTS](#p-darts) | Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation [Reference paper](https://arxiv.org/abs/1904.12760) |
+| Name | Brief introduction of algorithm |
+| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [ENAS](ENAS.md) | [Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/abs/1802.03268). In ENAS, a controller discovers neural networks by learning to search for an optimal subgraph within a large computational graph. It achieves speedups and strong performance by sharing parameters among child models. |
+| [DARTS](DARTS.md) | [DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) introduces a differentiable algorithm used in bilevel network optimization. |
+| [P-DARTS](PDARTS.md) | [Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) is based on DARTS. It introduces an efficient algorithm that gradually increases the searched depth during the search process. |
+| [SPOS](SPOS.md) | [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) constructs a simplified supernet trained with a uniform path-sampling method, and applies an evolutionary algorithm to search neural architectures efficiently. |
+| [CDARTS](CDARTS.md) | [Cyclic Differentiable Architecture Search](https://arxiv.org/abs/****) builds a cyclic feedback mechanism between the search and evaluation networks. It integrates the two networks into one architecture through the introduced cyclic differentiable architecture search framework. |

-Note that these algorithms run standalone **without nnictl** and only support PyTorch. Tensorflow 2.0 will be supported in a future release.
+One-shot algorithms run **standalone without nnictl**. Only the PyTorch version has been implemented. Tensorflow 2.x will be supported in a future release.

-### Dependencies
+Here are some common dependencies to run the examples. PyTorch needs to be above 1.2 to use `BoolTensor`.

* NNI 1.2+
* tensorboard
* PyTorch 1.2+
* git

-### ENAS
-
-[Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/abs/1802.03268). 
在 ENAS 中,Contoller 学习在大的计算图中搜索最有子图的方式来发现神经网络。 它通过在子模型间共享参数来实现加速和出色的性能指标。 - -#### 用法 - -NNI 中的 ENAS 还在开发中,当前仅支持在 CIFAR10 上 Macro/Micro 搜索空间的搜索阶段。 在 PTB 上从头开始训练及其搜索空间尚未完成。 [详细说明](ENAS.md)。 - -```bash -#如果未克隆 NNI 代码。 如果代码已被克隆,请忽略此行并直接进入代码目录。 -git clone https://github.com/Microsoft/nni.git - -# 搜索最好的网络架构 -cd examples/nas/enas - -# 在 Macro 搜索空间中搜索 -python3 search.py --search-for macro - -# 在 Micro 搜索空间中搜索 -python3 search.py --search-for micro - -# 查看更多选项 -python3 search.py -h -``` - -### DARTS - -[DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) 在算法上的主要贡献是,引入了一种在两级网络优化中使用的可微分算法。 [详细说明](DARTS.md)。 - -#### 用法 - -```bash -#如果未克隆 NNI 代码。 如果代码已被克隆,请忽略此行并直接进入代码目录。 -git clone https://github.com/Microsoft/nni.git - -# 搜索最好的架构 -cd examples/nas/darts -python3 search.py - -# 训练最好的架构 -python3 retrain.py --arc-checkpoint ./checkpoints/epoch_49.json -``` - -### P-DARTS - -[Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) 基于 [DARTS](#DARTS)。 它在算法上的主要贡献是引入了一种有效的算法,可在搜索过程中逐渐增加搜索的深度。 - -#### 用法 - -```bash -#如果未克隆 NNI 代码。 如果代码已被克隆,请忽略此行并直接进入代码目录。 -git clone https://github.com/Microsoft/nni.git - -# 搜索最好的架构 -cd examples/nas/pdarts -python3 search.py - -# 训练最好的架构,过程与 darts 相同。 -cd ../darts -python3 retrain.py --arc-checkpoint ../pdarts/checkpoints/epoch_2.json -``` - ## 使用 NNI API 注意,我们正在尝试通过统一的编程接口来支持各种 NAS 算法,当前处于试验阶段。 这意味着当前编程接口将来会有变化。 @@ -104,7 +44,7 @@ python3 retrain.py --arc-checkpoint ../pdarts/checkpoints/epoch_2.json 1. 在设计神经网络时,可能在层、子模型或连接上有多种选择,并且无法确定是其中一种或某些的组合的结果最好。 因此,需要简单的方法来表达候选的层或子模型。 2. 
在神经网络上应用 NAS 时,需要统一的方式来表达架构的搜索空间,这样不必为不同的搜索算法来更改代码。 -NNI 提出的 API 在[这里](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch)。 [这里](https://github.com/microsoft/nni/tree/master/examples/nas/darts)包含了基于此 API 的 NAS 实现示例。 +NNI 提出的 API 在[这里](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch)。 [这里](https://github.com/microsoft/nni/tree/master/examples/nas/naive)包含了基于此 API 的 NAS 实现示例。 ## **参考和反馈** * 在 GitHub 中[提交此功能的 Bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md); diff --git a/docs/zh_CN/Release.md b/docs/zh_CN/Release.md index 8a65ee765e..eba498b2b6 100644 --- a/docs/zh_CN/Release.md +++ b/docs/zh_CN/Release.md @@ -1,5 +1,44 @@ # 更改日志 +## 发布 1.3 - 12/30/2019 + +### 主要功能 + +#### 支持神经网络架构搜索算法 + +* [单路径一次性](https://github.com/microsoft/nni/tree/v1.3/examples/nas/spos/)算法和示例 + +#### 模型压缩算法支持 + +* [知识蒸馏](https://github.com/microsoft/nni/blob/v1.3/docs/zh_CN/TrialExample/KDExample.md)算法和使用示例 +* Pruners + * [L2Filter Pruner](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Compressor/Pruner.md#l2filter-pruner) + * [ActivationAPoZRankFilterPruner](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Compressor/Pruner.md#activationapozrankfilterpruner) + * [ActivationMeanRankFilterPruner](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Compressor/Pruner.md#activationmeanrankfilterpruner) +* [BNN Quantizer](https://github.com/microsoft/nni/blob/v1.3/docs/zh_CN/Compressor/Quantizer.md#bnn-quantizer) + +#### 训练平台 + +* OpenPAI 的 NFS 支持 + + 从 OpenPAI v0.11开始,HDFS 不再用作默认存储,可将 NFS、AzureBlob 或其他存储用作默认存储。 在本次版本中,NNI 扩展了对 OpenPAI 最近改动的支持,可与 OpenPAI v0.11 及后续版本的默认存储集成。 + +* Kubeflow 更新适配 + + 适配 Kubeflow 0.7 对 tf-operator 的新支持。 + +### 工程(代码和生成自动化) + +* 启用 [ESLint](https://eslint.org/) 静态代码分析。 + +### 小改动和 Bug 修复 + +* 正确识别内置 Tuner 和定制 Tuner +* Dispatcher 基类的日志 +* 修复有时 Tuner、Assessor 的失败会终止 Experiment 的 Bug。 +* 修复本机作为远程计算机的[问题](https://github.com/microsoft/nni/issues/1852) +* SMAC Tuner 中 Trial 配置的去重 
[ticket](https://github.com/microsoft/nni/issues/1364) + ## 发布 1.2 - 12/02/2019 ### 主要功能 @@ -30,7 +69,7 @@ - 文档 - 改进了 NNI API 文档,增加了更多的 docstring。 -### 修复的 Bug +### Bug 修复 - 修复当失败的 Trial 没有指标时,表格的排序问题。 -Issue #1773 - 页面切换时,保留选择的(最大、最小)状态。 -PR#1710 @@ -42,14 +81,14 @@ ### 主要功能 * 新 Tuner: [PPO Tuner](https://github.com/microsoft/nni/blob/v1.1/docs/zh_CN/Tuner/PPOTuner.md) -* [查看已停止的 Experiment](https://github.com/microsoft/nni/blob/v1.1/docs/zh_CN/Tutorial/Nnictl.md#view) +* [查看已停止的 Experiment](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Tutorial/Nnictl.md#view) * Tuner 可使用专门的 GPU 资源(参考[教程](https://github.com/microsoft/nni/blob/v1.1/docs/zh_CN/Tutorial/ExperimentConfig.md)中的 `gpuIndices` 了解详情) * 改进 WEB 界面 - Trial 详情页面可列出每个 Trial 的超参,以及开始结束时间(需要通过 "add column" 添加) - 优化大型 Experiment 的显示性能 - 更多示例 - [EfficientNet PyTorch 示例](https://github.com/ultmaster/EfficientNet-PyTorch) - - [Cifar10 NAS 示例](https://github.com/microsoft/nni/blob/v1.1/examples/trials/nas_cifar10/README_zh_CN.md) + - [Cifar10 NAS 示例](https://github.com/microsoft/nni/blob/v1.1/examples/trials/nas_cifar10/README.md) - [模型压缩工具包 - Alpha 发布](https://github.com/microsoft/nni/blob/v1.1/docs/zh_CN/Compressor/Overview.md):我们很高兴的宣布 NNI 的模型压缩工具包发布了。它还处于试验阶段,会根据使用反馈来改进。 诚挚邀请您使用、反馈,或更多贡献 ### 修复的 Bug @@ -62,26 +101,28 @@ ### 主要功能 * Tuners 和 Assessors - - - 支持自动特征生成和选择 -Issue#877 -PR #1387 + 提供自动特征接口 + 基于 Beam 搜索的 Tuner + [添加 Pakdd 示例](https://github.com/microsoft/nni/tree/master/examples/trials/auto-feature-engineering) - - 添加并行算法提高 TPE 在高并发下的性能。 -PR #1052 - - 为 hyperband 支持多阶段 -PR #1257 -- 训练平台 - - - 支持私有 Docker Registry -PR #755 - - * 改进 - * 增加 RestFUL API 的 Python 包装,支持通过代码获取指标的值 PR #1318 - * 新的 Python API : get_experiment_id(), get_trial_id() -PR #1353 -Issue #1331 & -Issue#1368 - * 优化 NAS 搜索空间 -PR #1393 + - 支持自动特征生成和选择 -Issue#877 -PR #1387 + + 提供自动特征接口 + + 基于 Beam 搜索的 Tuner + + [增加 Pakdd 示例](https://github.com/microsoft/nni/tree/master/examples/trials/auto-feature-engineering) + + 添加并行算法提高 TPE 
在高并发下的性能。 -PR #1052 + + 为 hyperband 支持多阶段 -PR #1257 ++ 训练平台 + + - 支持私有 Docker Registry -PR #755 + + * 改进 + * 增加 RESTful API 的 Python 包装,支持通过代码获取指标的值 PR #1318 + * 新的 Python API : get_experiment_id(), get_trial_id() -PR #1353 -Issue #1331 & -Issue#1368 + * 优化 NAS 搜索空间 -PR #1393 + 使用 _type 统一 NAS 搜索空间 -- "mutable_type" + 更新随机搜索 Tuner - + 将 gpuNum 设为可选 -Issue #1365 - + 删除 OpenPAI 模式下的 outputDir 和 dataDir 配置 -Issue #1342 - + 在 Kubeflow 模式下创建 Trial 时,codeDir 不再被拷贝到 logDir -Issue #1224 + + 将 gpuNum 设为可选 -Issue #1365 + + 删除 OpenPAI 模式下的 outputDir 和 dataDir 配置 -Issue #1342 + + 在 Kubeflow 模式下创建 Trial 时,codeDir 不再被拷贝到 logDir -Issue #1224 + Web 门户和用户体验 - + - 在 Web 界面的搜索过程中显示最好指标的曲线 -Issue #1218 - 在多阶段 Experiment 中,显示参数列表的当前值 -Issue1210 -PR #1348 - 在 AddColumn 中增加 "Intermediate count" 选项。 -Issue #1210 @@ -90,12 +131,13 @@ - 在命令行中为 nnictl 命令增加详细文档的连接 -Issue #1260 - 用户体验改进:显示 Error 日志 -Issue #1173 - 文档 - + - 更新文档结构 -Issue #1231 - - [多阶段文档的改进](AdvancedFeature/MultiPhase.md) -Issue #1233 -PR #1242 + 增加配置示例 - - [Web 界面描述改进](Tutorial/WebUI.md) -PR #1419 + - [多阶段文档的改进](AdvancedFeature/MultiPhase.md) -Issue #1233 -PR #1242 + + 添加配置示例 + + [Web 界面描述改进](Tutorial/WebUI.md) -PR #1419 -### 修复的 Bug +### Bug 修复 * (Bug 修复)修复 0.9 版本中的链接 -Issue #1236 * (Bug 修复)自动完成脚本 @@ -116,20 +158,22 @@ ### 主要功能 -* 通用 NAS 编程接口 +* 通用 NAS 编程接口 * 为 NAS 接口添加 `enas-mode` 和 `oneshot-mode`:[PR #1201](https://github.com/microsoft/nni/pull/1201#issue-291094510) * [有 Matern 核的高斯 Tuner](Tuner/GPTuner.md) * 支持多阶段 Experiment - + * 为多阶段 Experiment 增加新的训练平台:pai 模式从 v0.9 开始支持多阶段 Experiment。 - * 为以下内置 Tuner 增加多阶段的功能: - * TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner。 - - 有关详细信息,参考[实现多阶段的 Tuner](AdvancedFeature/MultiPhase.md)。 + * 为以下内置 Tuner 增加多阶段的功能: + + + * TPE, Random Search, Anneal, Naïve Evolution, SMAC, Network Morphism, Metis Tuner。 + + 有关详细信息,参考[实现多阶段的 Tuner](AdvancedFeature/MultiPhase.md)。 * Web 界面 - + * 在 Web 界面中可比较 Trial。 有关详细信息,参考[查看 Trial 状态](Tutorial/WebUI.md) * 允许用户调节 Web 
界面的刷新间隔。 有关详细信息,参考[查看概要页面](Tutorial/WebUI.md) * 更友好的显示中间结果。 有关详细信息,参考[查看 Trial 状态](Tutorial/WebUI.md) @@ -158,7 +202,7 @@ * 在已经运行非 NNI 任务的 GPU 上也能运行 Trial * 支持 Kubeflow v1beta2 操作符 * 支持 Kubeflow TFJob/PyTorchJob v1beta2 -* [通用 NAS 编程接口](AdvancedFeature/GeneralNasInterfaces.md) +* [通用 NAS 编程接口](https://github.com/microsoft/nni/blob/v0.8/docs/zh_CN/GeneralNasInterfaces.md) * 实现了 NAS 的编程接口,可通过 NNI Annotation 很容易的表达神经网络架构搜索空间 * 提供新命令 `nnictl trial codegen` 来调试 NAS 代码生成部分 * 提供 NAS 编程接口教程,NAS 在 MNIST 上的示例,用于 NAS 的可定制的随机 Tuner @@ -274,10 +318,10 @@ #### 支持新的 Tuner 和 Assessor -* 支持新的 [Metis Tuner](Tuner/MetisTuner.md)。 **在线**超参调优的场景下,Metis 算法已经被证明非常有效。 +* 支持新的 [Metis Tuner](Tuner/MetisTuner.md)。 对于**在线**超参调优的场景,Metis 算法已经被证明非常有效。 * 支持 [ENAS customized tuner](https://github.com/countif/enas_nni)。由 GitHub 社区用户所贡献。它是神经网络的搜索算法,能够通过强化学习来学习神经网络架构,比 NAS 的性能更好。 * 支持 [Curve fitting (曲线拟合)Assessor](Assessor/CurvefittingAssessor.md),通过曲线拟合的策略来实现提前终止 Trial。 -* 进一步支持 [Weight Sharing(权重共享)](AdvancedFeature/AdvancedNas.md):为 NAS Tuner 通过 NFS 来提供权重共享。 +* [权重共享的](https://github.com/microsoft/nni/blob/v0.5/docs/AdvancedNAS.md)高级支持:为 NAS Tuner 提供权重共享,当前支持 NFS。 #### 改进训练平台 @@ -361,12 +405,12 @@ ### NNICTL 的新功能和更新 * 支持同时运行多个 Experiment。 - + 在 v0.3 以前,NNI 仅支持一次运行一个 Experiment。 此版本开始,用户可以同时运行多个 Experiment。 每个 Experiment 都需要一个唯一的端口,第一个 Experiment 会像以前版本一样使用默认端口。 需要为其它 Experiment 指定唯一端口: - - ```bash - nnictl create --port 8081 --config - ``` + + ```bash + nnictl create --port 8081 --config + ``` * 支持更新最大 Trial 的数量。 使用 `nnictl update --help` 了解详情。 或参考 [NNICTL](Tutorial/Nnictl.md) 查看完整帮助。 @@ -375,15 +419,15 @@ * 不兼容的改动:nn.get_parameters() 改为 nni.get_next_parameter。 所有以前版本的示例将无法在 v0.3 上运行,需要重新克隆 NNI 代码库获取新示例。 如果在自己的代码中使用了 NNI,也需要相应的更新。 * 新 API **nni.get_sequence_id()**。 每个 Trial 任务都会被分配一个唯一的序列数字,可通过 nni.get_sequence_id() API 来获取。 - - ```bash - git clone -b v0.3 https://github.com/microsoft/nni.git - ``` + + ```bash + git clone -b v0.3 https://github.com/microsoft/nni.git + ``` * 
**nni.report_final_result(result)** API 对结果参数支持更多的数据类型。 - + 可用类型: - + * int * float * 包含有 'default' 键值的 dict,'default' 的值必须为 int 或 float。 dict 可以包含任何其它键值对。 @@ -394,11 +438,11 @@ ### 新示例 -* 公共的 NNI Docker 映像: - - ```bash - docker pull msranni/nni:latest - ``` +* 公开的 NNI Docker 映像: + + ```bash + docker pull msranni/nni:latest + ``` * 新的 Trial 示例:[NNI Sklearn 示例](https://github.com/microsoft/nni/tree/master/examples/trials/sklearn) diff --git a/docs/zh_CN/TrainingService/PaiYarnMode.md b/docs/zh_CN/TrainingService/PaiYarnMode.md index c84debfa55..0f930967a2 100644 --- a/docs/zh_CN/TrainingService/PaiYarnMode.md +++ b/docs/zh_CN/TrainingService/PaiYarnMode.md @@ -102,7 +102,7 @@ paiYarnConfig: ``` nnictl create --config exp_paiYarn.yml ``` -来在 paiYarn 模式下启动 Experiment。 NNI 会为每个 Trial 创建 OpenPAIYarn 作业,作业名称的格式为 `nni_exp_{experiment_id}_trial_{trial_id}`。 可以在 OpenPAIYarn 集群的网站中看到 NNI 创建的作业,例如: ![](../../img/nni_paiYarn_joblist.jpg) +来在 paiYarn 模式下启动 Experiment。 NNI 会为每个 Trial 创建 OpenPAIYarn 作业,作业名称的格式为 `nni_exp_{experiment_id}_trial_{trial_id}`。 可以在 OpenPAIYarn 集群的网站中看到 NNI 创建的作业,例如: ![](../../img/nni_pai_joblist.jpg) 注意:paiYarn 模式下,NNIManager 会启动 RESTful 服务,监听端口为 NNI 网页服务器的端口加1。 例如,如果网页端口为`8080`,那么 RESTful 服务器会监听在 `8081`端口,来接收运行在 Kubernetes 中的 Trial 作业的指标。 因此,需要在防火墙中启用端口 `8081` 的 TCP 协议,以允许传入流量。 diff --git a/docs/zh_CN/TrainingService/RemoteMachineMode.md b/docs/zh_CN/TrainingService/RemoteMachineMode.md index eba05921b5..e4b6917f84 100644 --- a/docs/zh_CN/TrainingService/RemoteMachineMode.md +++ b/docs/zh_CN/TrainingService/RemoteMachineMode.md @@ -1,8 +1,22 @@ -# 在多机上运行 Experiment +# 在远程计算机上运行 Experiment -NNI 支持通过 SSH 通道在多台计算机上运行 Experiment,称为 `remote` 模式。 NNI 需要这些计算机的访问权限,并假定已配置好了深度学习训练环境。 +NNI 可以通过 SSH 在多个远程计算机上运行同一个 Experiment,称为 `remote` 模式。 这就像一个轻量级的训练平台。 在此模式下,可以从计算机启动 NNI,并将 Trial 并行调度到远程计算机。 -例如:有三台服务器,登录账户为 `bob`(注意:账户不必在各台计算机上一致): +## 远程计算机的要求 + +* 仅支持 Linux 作为远程计算机,其[配置需求](../Tutorial/Installation.md)与 NNI 本机模式相同。 + +* 
根据[安装文章](../Tutorial/Installation.md),在每台计算机上安装 NNI。 + +* 确保远程计算机满足 Trial 代码的环境要求。 如果默认环境不符合要求,可以将设置脚本添加到 NNI 配置的 `command` 字段。 + +* 确保远程计算机能被运行 `nnictl` 命令的计算机通过 SSH 访问。 同时支持 SSH 的密码和密钥验证方法。 有关高级用法,参考[配置](../Tutorial/ExperimentConfig.md)的 machineList 部分。 + +* 确保每台计算机上的 NNI 版本一致。 + +## 运行 Experiment + +例如,有三台机器,可使用用户名和密码登录。 | IP | 用户名 | 密码 | | -------- | --- | ------ | @@ -10,15 +24,9 @@ NNI 支持通过 SSH 通道在多台计算机上运行 Experiment,称为 `remo | 10.1.1.2 | bob | bob123 | | 10.1.1.3 | bob | bob123 | -## 设置 NNI 环境 +在这三台计算机或另一台能访问这些计算机的环境中安装并运行 NNI。 -按照[指南](../Tutorial/QuickStart.md)在每台计算机上安装 NNI。 - -## 运行 Experiment - -将 NNI 安装在可以访问上述三台计算机的网络的另一台计算机上,或者仅在三台计算机中的任何一台上运行 `nnictl` 即可启动 Experiment。 - -以 `examples/trials/mnist-annotation` 为例。 此处示例在 `examples/trials/mnist-annotation/config_remote.yml`: +以 `examples/trials/mnist-annotation` 为例。 示例文件 `examples/trials/mnist-annotation/config_remote.yml` 的内容如下: ```yaml authorName: default @@ -58,14 +66,8 @@ machineList: passwd: bob123 ``` -`codeDir` 中的文件会被自动上传到远程服务器。 可在不同的操作系统上运行 NNI (Windows, Linux, MacOS),来在远程机器上(仅支持 Linux)运行 Experiment。 +`codeDir` 中的文件会自动上传到远程计算机中。 可在 Windows、Linux 或 macOS 上运行以下命令,在远程 Linux 计算机上启动 Trial: ```bash nnictl create --config examples/trials/mnist-annotation/config_remote.yml -``` - -也可使用公钥/私钥对,而非用户名/密码进行身份验证。 有关高级用法,请参考[实验配置参考](../Tutorial/ExperimentConfig.md)。 - -## 版本校验 - -从 0.6 开始,NNI 支持版本校验,详情参考[这里](PaiMode.md)。 \ No newline at end of file +``` \ No newline at end of file diff --git a/docs/zh_CN/TrainingService/SupportTrainingService.md b/docs/zh_CN/TrainingService/SupportTrainingService.md index fbf6f6a2cd..5afcc13020 100644 --- a/docs/zh_CN/TrainingService/SupportTrainingService.md +++ b/docs/zh_CN/TrainingService/SupportTrainingService.md @@ -19,21 +19,22 @@ NNI 不仅提供了这些内置的训练平台,还提供了轻松连接自己 TrainingService 在设计上为了便于实现,将平台相关的公共属性抽象成类。用户只需要继承这个抽象类,并根据平台特点实现子类,便能够实现 TrainingService。 TrainingService 的声明如下: - abstract class TrainingService { - public abstract listTrialJobs(): Promise; - public abstract 
getTrialJob(trialJobId: string): Promise; - public abstract addTrialJobMetricListener(listener: (metric: TrialJobMetric) => void): void; - public abstract removeTrialJobMetricListener(listener: (metric: TrialJobMetric) => void): void; - public abstract submitTrialJob(form: JobApplicationForm): Promise; - public abstract updateTrialJob(trialJobId: string, form: JobApplicationForm): Promise; - public abstract get isMultiPhaseJobSupported(): boolean; - public abstract cancelTrialJob(trialJobId: string, isEarlyStopped?: boolean): Promise; - public abstract setClusterMetadata(key: string, value: string): Promise; - public abstract getClusterMetadata(key: string): Promise; - public abstract cleanUp(): Promise; - public abstract run(): Promise; - } - +```typescript +abstract class TrainingService { + public abstract listTrialJobs(): Promise<TrialJobDetail[]>; + public abstract getTrialJob(trialJobId: string): Promise<TrialJobDetail>; + public abstract addTrialJobMetricListener(listener: (metric: TrialJobMetric) => void): void; + public abstract removeTrialJobMetricListener(listener: (metric: TrialJobMetric) => void): void; + public abstract submitTrialJob(form: JobApplicationForm): Promise<TrialJobDetail>; + public abstract updateTrialJob(trialJobId: string, form: JobApplicationForm): Promise<TrialJobDetail>; + public abstract get isMultiPhaseJobSupported(): boolean; + public abstract cancelTrialJob(trialJobId: string, isEarlyStopped?: boolean): Promise<void>; + public abstract setClusterMetadata(key: string, value: string): Promise<void>; + public abstract getClusterMetadata(key: string): Promise<string>; + public abstract cleanUp(): Promise<void>; + public abstract run(): Promise<void>; +} +``` TrainingService 的父类有一些抽象函数,用户需要继承父类并实现所有这些抽象函数。 有关如何实现 TrainingService 的更多信息,[参考这里](https://github.com/microsoft/nni/blob/master/docs/zh_CN/TrainingService/HowToImplementTrainingService.md)。 \ No newline at end of file diff --git a/docs/zh_CN/TrialExample/EfficientNet.md b/docs/zh_CN/TrialExample/EfficientNet.md new file mode 100644 index 0000000000..bf44c695ab --- /dev/null 
+++ b/docs/zh_CN/TrialExample/EfficientNet.md @@ -0,0 +1,21 @@ +# EfficientNet + +[EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) + +如论文 3.3 节所述,使用遍历搜索来找到 EfficientNet-B1 的 alpha、beta 和 gamma 的最好组合。 搜索空间、Tuner 和配置示例如下。 + +## 说明 + +[示例代码](https://github.com/microsoft/nni/tree/master/examples/trials/efficientnet) + +1. 将示例代码目录设为当前工作目录。 +2. 运行 `git clone https://github.com/ultmaster/EfficientNet-PyTorch` 来克隆修改过的 [EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch)。 修改尽可能接近原始的 [TensorFlow 版本](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)(包括 EMA、标签平滑等);另外还添加了从 Tuner 获取参数并上报中间和最终结果的代码。 将其克隆至 `EfficientNet-PyTorch` 目录,这样 `main.py`、`train_imagenet.sh` 等文件就会位于配置文件中指定的路径下。 +3. 运行 `nnictl create --config config_local.yml` (OpenPAI 可使用 `config_pai.yml`) 来找到最好的 EfficientNet-B1。 可根据环境调整训练平台(OpenPAI、本机、远程)和 batch size。 + +要在 ImageNet 上训练,可参考 `EfficientNet-PyTorch/train_imagenet.sh`。 下载 ImageNet,并参考 [PyTorch 格式](https://pytorch.org/docs/stable/torchvision/datasets.html#imagenet) 来解压,然后将 `/mnt/data/imagenet` 替换为 ImageNet 的路径。 此文件也是如何将 ImageNet 挂载到 OpenPAI 容器的示例。 + +## 结果 + +下图展示了 acc@1 和 alpha、beta、gamma 之间的关系。 + +![](../../img/efficientnet_search_result.png) diff --git a/docs/zh_CN/TrialExample/KDExample.md b/docs/zh_CN/TrialExample/KDExample.md index 8f669b3d6d..ef91b9b905 100644 --- a/docs/zh_CN/TrialExample/KDExample.md +++ b/docs/zh_CN/TrialExample/KDExample.md @@ -30,4 +30,4 @@ for batch_idx, (data, target) in enumerate(train_loader): * **kd_teacher_model:** 预训练过的教师模型 * **kd_T:** 用于平滑教师模型输出的温度。 -完整代码可在这里找到 \ No newline at end of file +完整代码[在这里](https://github.com/microsoft/nni/tree/v1.3/examples/model_compress/knowledge_distill/)。 diff --git a/docs/zh_CN/TrialExample/SklearnExamples.md b/docs/zh_CN/TrialExample/SklearnExamples.md index 36f9b6fa67..e860358040 100644 --- a/docs/zh_CN/TrialExample/SklearnExamples.md +++ b/docs/zh_CN/TrialExample/SklearnExamples.md @@ 
-20,7 +20,7 @@ nnictl create --config ./config.yml 示例使用了数字数据集,它是由 1797 个 8x8 的图片组成,每个图片都是一个手写数字,目标是将图片分为 10 类。 -在这个示例中,使用 SVC 作为模型,并为此模型选择一些参数,包括 `"C", "keral", "degree", "gamma" 和 "coef0"`。 关于这些参数的更多信息,可参考[这里](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)。 +在这个示例中,使用 SVC 作为模型,并为此模型选择一些参数,包括 `"C", "kernel", "degree", "gamma" 和 "coef0"`。 关于这些参数的更多信息,可参考[这里](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)。 ### 2.2 回归 @@ -63,7 +63,7 @@ nnictl create --config ./config.yml ```json { "C": {"_type":"uniform","_value":[0.1, 1]}, - "keral": {"_type":"choice","_value":["linear", "rbf", "poly", "sigmoid"]}, + "kernel": {"_type":"choice","_value":["linear", "rbf", "poly", "sigmoid"]}, "degree": {"_type":"choice","_value":[1, 2, 3, 4]}, "gamma": {"_type":"uniform","_value":[0.01, 0.1]}, "coef0 ": {"_type":"uniform","_value":[0.01, 0.1]} @@ -75,7 +75,7 @@ nnictl create --config ./config.yml ```python params = { 'C': 1.0, - 'keral': 'linear', + 'kernel': 'linear', 'degree': 3, 'gamma': 0.01, 'coef0': 0.01 diff --git a/docs/zh_CN/Tutorial/FAQ.md b/docs/zh_CN/Tutorial/FAQ.md index bcd7fe7a25..7577248612 100644 --- a/docs/zh_CN/Tutorial/FAQ.md +++ b/docs/zh_CN/Tutorial/FAQ.md @@ -56,6 +56,10 @@ nnictl 在执行时,使用 tmp 目录作为临时目录来复制 codeDir 下 参考 [Windows 上使用 NNI](NniOnWindows.md)。 +### 更多常见问题解答 + +[标有常见问题标签的 Issue](https://github.com/microsoft/nni/labels/FAQ) + ### 帮助改进 在创建新问题前,请在 https://github.com/Microsoft/nni/issues 查看是否有人已经报告了相似的问题。 \ No newline at end of file diff --git a/docs/zh_CN/Tutorial/HowToDebug.md b/docs/zh_CN/Tutorial/HowToDebug.md index 580da25dcd..f2c9c72f8c 100644 --- a/docs/zh_CN/Tutorial/HowToDebug.md +++ b/docs/zh_CN/Tutorial/HowToDebug.md @@ -81,4 +81,4 @@ NNI 中有不同的错误类型。 根据严重程度,可分为三类。 当 N 如图,每个 Trial 都有日志路径,可以从中找到 Trial 的日志和 stderr。 -除了 Experiment 级调试之外,NNI 还提供调试单个 Trial 的功能,而无需启动整个 Experiment。 有关调试单个 Trial 代码的更多信息,请参考[独立运行模式](../TrialExample/Trials.md#standalone-mode-for-debug)。 \ No newline at end of file +除了 
Experiment 级调试之外,NNI 还提供调试单个 Trial 的功能,而无需启动整个 Experiment。 有关调试单个 Trial 代码的更多信息,请参考[独立运行模式](../TrialExample/Trials.md#用于调试的独立模式)。 \ No newline at end of file diff --git a/docs/zh_CN/Tutorial/Installation.md b/docs/zh_CN/Tutorial/Installation.md index 830676ac1b..9a645cdc86 100644 --- a/docs/zh_CN/Tutorial/Installation.md +++ b/docs/zh_CN/Tutorial/Installation.md @@ -1,20 +1,22 @@ # 安装 NNI -当前支持在 Linux,Mac 和 Windows 下安装。 +当前支持在 Linux,macOS 和 Windows 下安装。 -## **在 Linux 和 Mac 下安装** +## 在 Linux 或 macOS 上安装 -* **通过 pip 命令安装 NNI** +* 通过 pip 命令安装 NNI - 先决条件:`python >= 3.5` + 先决条件:`python 64-bit >= 3.5` ```bash python3 -m pip install --upgrade nni ``` -* **通过源代码安装 NNI** +* 通过源代码安装 NNI - 先决条件:`python >=3.5`, `git`, `wget` + 如果对某个或最新版本的代码感兴趣,可通过源代码安装 NNI。 + + 先决条件:`python 64-bit >=3.5`, `git`, `wget` ```bash git clone -b v0.8 https://github.com/Microsoft/nni.git @@ -22,25 +24,27 @@ ./install.sh ``` -* **在 docker 映像中安装 NNI** +* 在 Docker 映像中使用 NNI 也可将 NNI 安装到 docker 映像中。 参考[这里](../deployment/docker/README.md)来生成 NNI 的 Docker 映像。 也可通过此命令从 Docker Hub 中直接拉取 NNI 的映像 `docker pull msranni/nni:latest`。 -## **在 Windows 上安装** +## 在 Windows 上安装 -推荐使用 Anaconda 或 Miniconda。 +强烈建议使用 Anaconda 或 Miniconda 来管理多个 Python 环境。 -* **通过 pip 命令安装 NNI** +* 通过 pip 命令安装 NNI - 先决条件:`python(64-bit) >= 3.5` + 先决条件:`python 64-bit >= 3.5` ```bash python -m pip install --upgrade nni ``` -* **通过源代码安装 NNI** +* 通过源代码安装 NNI + + 如果对某个或最新版本的代码感兴趣,可通过源代码安装 NNI。 - 先决条件:`python >=3.5`, `git`, `PowerShell` + 先决条件:`python 64-bit >=3.5`, `git`, `PowerShell` ```bash git clone -b v0.8 https://github.com/Microsoft/nni.git @@ -48,43 +52,104 @@ powershell -ExecutionPolicy Bypass -file install.ps1 ``` -## **系统需求** - -以下是 NNI 在 Linux 下的最低配置。 由于程序变更,NNI 的最低配置会有所更改。 - -| | 最低配置 | 推荐配置 | -| -------- | ------------------------------------- | ----------------------------------------- | -| **操作系统** | Ubuntu 16.04 或以上版本 | Ubuntu 16.04 或以上版本 | -| **CPU** | Intel® Core™ i3 或 AMD Phenom™ X3 8650 | Intel® Core™ i5 或 AMD Phenom™ II X3 
或更高配置 | -| **GPU** | NVIDIA® GeForce® GTX 460 | NVIDIA® GeForce® GTX 660 或更高配置 | -| **内存** | 4 GB | 6 GB | -| **存储** | 30 GB 可用的磁盘空间 | | -| **网络** | 宽带连接 | | -| **分辨率** | 1024 x 768 以上 | | - -以下是 NNI 在 MacOS 下的最低配置。 由于程序变更,NNI 的最低配置会有所更改。 - -| | 最低配置 | 推荐配置 | -| -------- | -------------------------------------------------- | ------------------------ | -| **操作系统** | macOS 10.14.1 (最新版本) | macOS 10.14.1 (最新版本) | -| **CPU** | Intel® Core™ i5-760 或更高 | Intel® Core™ i7-4770 或更高 | -| **GPU** | NVIDIA® GeForce® GT 750M 或 AMD Radeon™ R9 M290 或更高 | AMD Radeon™ R9 M395X 或更高 | -| **内存** | 4 GB | 8 GB | -| **存储** | 70GB 可用空间及 7200 RPM 硬盘 | 70GB 可用空间 SSD 硬盘 | -| **网络** | 宽带连接 | | -| **分辨率** | 1024 x 768 以上 | | - -以下是 NNI 在 Windows 上的最低配置,推荐使用 Windows 10 1809 版。 由于程序变更,NNI 的最低配置会有所更改。 - -| | 最低配置 | 推荐配置 | -| -------- | ------------------------------------- | ----------------------------------------- | -| **操作系统** | Windows 10 | Windows 10 | -| **CPU** | Intel® Core™ i3 或 AMD Phenom™ X3 8650 | Intel® Core™ i5 或 AMD Phenom™ II X3 或更高配置 | -| **GPU** | NVIDIA® GeForce® GTX 460 | NVIDIA® GeForce® GTX 660 或更高配置 | -| **内存** | 4 GB | 6 GB | -| **存储** | 30 GB 可用的磁盘空间 | | -| **网络** | 宽带连接 | | -| **分辨率** | 1024 x 768 以上 | | +## 验证安装 + +以下示例基于 TensorFlow 1.x。确保运行环境中使用的是 **TensorFlow 1.x**。 + +* 通过克隆源代码下载示例。 + + ```bash + git clone -b v1.3 https://github.com/Microsoft/nni.git + ``` + +* 运行 MNIST 示例。 + + Linux 或 macOS + + ```bash + nnictl create --config nni/examples/trials/mnist-tfv1/config.yml + ``` + + Windows + + ```bash + nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml + ``` + +* 在命令行中等待输出 `INFO: Successfully started experiment!`。 此消息表明 Experiment 已成功启动。 通过命令行输出的 `Web UI url` 来访问 Experiment 的界面。 + +```text +INFO: Starting restful server... +INFO: Successfully started Restful server! +INFO: Setting local config... +INFO: Successfully set local config! +INFO: Starting experiment... +INFO: Successfully started experiment! 
+----------------------------------------------------------------------- +The experiment id is egchD4qy +The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080 +----------------------------------------------------------------------- + +You can use these commands to get more information about the experiment +----------------------------------------------------------------------- + commands description + +1. nnictl experiment show show the information of experiments +2. nnictl trial ls list all of trial jobs +3. nnictl top monitor the status of running experiments +4. nnictl log stderr show stderr log content +5. nnictl log stdout show stdout log content +6. nnictl stop stop an experiment +7. nnictl trial kill kill a trial job by id +8. nnictl --help get help information about nnictl +----------------------------------------------------------------------- +``` + +* 在浏览器中打开 `Web UI url`,可看到下图的 Experiment 详细信息,以及所有的 Trial 任务。 查看[这里](../Tutorial/WebUI.md)的更多页面。 + +![概述](../../img/webui_overview_page.png) + +![详细说明](../../img/webui_trialdetail_page.png) + +## 系统需求 + +由于程序变更,NNI 的最低配置会有所更改。 + +### Linux + +| | 推荐配置 | 最低配置 | +| -------- | ----------------------------------------- | ------------------------------------- | +| **操作系统** | Ubuntu 16.04 或以上版本 | | +| **CPU** | Intel® Core™ i5 或 AMD Phenom™ II X3 或更高配置 | Intel® Core™ i3 或 AMD Phenom™ X3 8650 | +| **GPU** | NVIDIA® GeForce® GTX 660 或更高配置 | NVIDIA® GeForce® GTX 460 | +| **内存** | 6 GB | 4 GB | +| **存储** | 30 GB 可用的磁盘空间 | | +| **网络** | 宽带连接 | | +| **分辨率** | 1024 x 768 以上 | | + +### macOS + +| | 推荐配置 | 最低配置 | +| -------- | ------------------------ | -------------------------------------------------- | +| **操作系统** | macOS 10.14.1 或更高版本 | | +| **CPU** | Intel® Core™ i7-4770 或更高 | Intel® Core™ i5-760 或更高 | +| **GPU** | AMD Radeon™ R9 M395X 或更高 | NVIDIA® GeForce® GT 750M 或 AMD Radeon™ R9 M290 或更高 | +| **内存** | 8 GB | 4 GB | +| **存储** | 70GB 可用空间 SSD 硬盘 | 70GB 可用空间及 7200 RPM 硬盘 | +| **网络** | 宽带连接 | | +| 
**分辨率** | 1024 x 768 以上 | | + +### Windows + +| | 推荐配置 | 最低配置 | +| -------- | ----------------------------------------- | ------------------------------------- | +| **操作系统** | Windows 10 1809 或更高版本 | | +| **CPU** | Intel® Core™ i5 或 AMD Phenom™ II X3 或更高配置 | Intel® Core™ i3 或 AMD Phenom™ X3 8650 | +| **GPU** | NVIDIA® GeForce® GTX 660 或更高配置 | NVIDIA® GeForce® GTX 460 | +| **内存** | 6 GB | 4 GB | +| **存储** | 30 GB 可用的磁盘空间 | | +| **网络** | 宽带连接 | | +| **分辨率** | 1024 x 768 以上 | | ## 更多 diff --git a/docs/zh_CN/Tutorial/Nnictl.md b/docs/zh_CN/Tutorial/Nnictl.md index acee5d4534..38b66d314b 100644 --- a/docs/zh_CN/Tutorial/Nnictl.md +++ b/docs/zh_CN/Tutorial/Nnictl.md @@ -49,6 +49,7 @@ nnictl 支持的命令: | --config, -c | True | | Experiment 的 YAML 配置文件 | | --port, -p | False | | RESTful 服务的端口 | | --debug, -d | False | | 设置为调试模式 | + | --watch, -w | False | | 启动为监视模式 | * 示例 @@ -97,6 +98,7 @@ nnictl 支持的命令: | id | True | | 要恢复的 Experiment 标识 | | --port, -p | False | | 要恢复的 Experiment 使用的 RESTful 服务端口 | | --debug, -d | False | | 设置为调试模式 | + | --watch, -w | False | | 启动为监视模式 | * 示例 diff --git a/docs/zh_CN/Tutorial/QuickStart.md b/docs/zh_CN/Tutorial/QuickStart.md index 3ed05f3e68..b886debf18 100644 --- a/docs/zh_CN/Tutorial/QuickStart.md +++ b/docs/zh_CN/Tutorial/QuickStart.md @@ -2,15 +2,15 @@ ## 安装 -当前支持 Linux,MacOS 和 Windows,在 Ubuntu 16.04 或更高版本,MacOS 10.14.1 以及 Windows 10.1809 上进行了测试。 在 `python >= 3.5` 的环境中,只需要运行 `pip install` 即可完成安装。 +当前支持 Linux,macOS 和 Windows,在 Ubuntu 16.04 或更高版本,macOS 10.14.1 以及 Windows 10.1809 上进行了测试。 在 `python >= 3.5` 的环境中,只需要运行 `pip install` 即可完成安装。 -#### Linux 和 MacOS +**Linux 和 macOS** ```bash python3 -m pip install --upgrade nni ``` -#### Windows +**Windows** ```bash python -m pip install --upgrade nni @@ -18,7 +18,7 @@ 注意: -* 在 Linux 和 MacOS 上,如果要将 NNI 安装到当前用户的 home 目录中,可使用 `--user`,则不需要特殊权限。 +* 在 Linux 和 macOS 上,如果要将 NNI 安装到当前用户的 home 目录中,可使用 `--user`,则不需要特殊权限。 * 如果遇到如`Segmentation fault` 这样的任何错误请参考[常见问题](FAQ.md)。 * 参考[安装 
NNI](Installation.md),来了解`系统需求`。 @@ -54,21 +54,22 @@ if __name__ == '__main__': NNI 用来帮助超参调优。它的流程如下: - 输入: 搜索空间, Trial 代码, 配置文件 - 输出: 一组最佳的超参配置 - - 1: For t = 0, 1, 2, ..., maxTrialNum, - 2: hyperparameter = 从搜索空间选择一组参数 - 3: final result = run_trial_and_evaluate(hyperparameter) - 4: 返回最终结果给 NNI - 5: If 时间达到上限, - 6: 停止实验 - 7: return 最好的实验结果 - +```text +输入: 搜索空间, Trial 代码, 配置文件 +输出: 一组最佳的超参配置 + +1: For t = 0, 1, 2, ..., maxTrialNum, +2: hyperparameter = 从搜索空间选择一组参数 +3: final result = run_trial_and_evaluate(hyperparameter) +4: 返回最终结果给 NNI +5: If 时间达到上限, +6: 停止实验 +7: return 最好的实验结果 +``` 如果需要使用 NNI 来自动训练模型,找到最佳超参,需要如下三步: -**使用 NNI 时的三个步骤** +**启动 Experiment 的三个步骤** **第一步**:定义 JSON 格式的`搜索空间`文件,包括所有需要搜索的超参的`名称`和`分布`(离散和连续值均可)。 @@ -140,7 +141,7 @@ trial: 上面的代码都已准备好,并保存在 [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1)。 -#### Linux 和 macOS +**Linux 和 macOS** 从命令行使用 **config.yml** 文件启动 MNIST Experiment 。 @@ -148,17 +149,17 @@ trial: nnictl create --config nni/examples/trials/mnist-tfv1/config.yml ``` -#### Windows +**Windows** 从命令行使用 **config_windows.yml** 文件启动 MNIST Experiment 。 -**注意**:如果使用 Windows,则需要在 config.yml 文件中,将 `python3` 改为 `python`,或者使用 config_windows.yml 来开始 Experiment。 +注意:如果使用 Windows,则需要在 config.yml 文件中,将 `python3` 改为 `python`,或者使用 config_windows.yml 来开始 Experiment。 ```bash nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml ``` -注意:**nnictl** 是一个命令行工具,用来控制 NNI Experiment,如启动、停止、继续 Experiment,启动、停止 NNIBoard 等等。 查看[这里](Nnictl.md),了解 `nnictl` 更多用法。 +注意:`nnictl` 是一个命令行工具,用来控制 NNI Experiment,如启动、停止、继续 Experiment,启动、停止 NNIBoard 等等。 查看[这里](Nnictl.md),了解 `nnictl` 更多用法。 在命令行中等待输出 `INFO: Successfully started experiment!`。 此消息表明 Experiment 已成功启动。 期望的输出如下: @@ -201,7 +202,7 @@ Web 地址为:[IP 地址]:8080 在浏览器中打开 `Web 界面地址`(即:`[IP 地址]:8080`),就可以看到 Experiment 的详细信息,以及所有的 Trial 任务。 如果无法打开终端中的 Web 界面链接,可以参考 [FAQ](FAQ.md)。 -#### 查看概要页面 +### 查看概要页面 点击标签 "Overview"。 @@ -213,7 +214,7 @@ Experiment 
相关信息会显示在界面上,配置和搜索空间等。 可 ![](../../img/QuickStart2.png) -#### 查看 Trial 详情页面 +### 查看 Trial 详情页面 点击 "Default Metric" 来查看所有 Trial 的点图。 悬停鼠标来查看默认指标和搜索空间信息。 diff --git a/docs/zh_CN/conf.py b/docs/zh_CN/conf.py index f1336f1c78..d5bec553af 100644 --- a/docs/zh_CN/conf.py +++ b/docs/zh_CN/conf.py @@ -47,6 +47,9 @@ 'sphinx.ext.napoleon', ] +# 添加示例模块 +autodoc_mock_imports = ['apex'] + # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] @@ -72,7 +75,7 @@ # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This pattern also affects html_static_path and html_extra_path. -exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] +exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'Release_v1.0.md'] # The name of the Pygments (syntax highlighting) style to use. pygments_style = None diff --git a/docs/zh_CN/examples.rst b/docs/zh_CN/examples.rst index f76ce9eb3d..95e0f94fee 100644 --- a/docs/zh_CN/examples.rst +++ b/docs/zh_CN/examples.rst @@ -11,3 +11,5 @@ EvolutionSQuAD<./TrialExample/SquadEvolutionExamples> GBDT<./TrialExample/GbdtExample> RocksDB <./TrialExample/RocksdbExamples> + KD 示例 <./TrialExample/KDExample> + EfficientNet <./TrialExample/EfficientNet> diff --git a/docs/zh_CN/model_compression.rst b/docs/zh_CN/model_compression.rst index 34d05b4844..2e273a79eb 100644 --- a/docs/zh_CN/model_compression.rst +++ b/docs/zh_CN/model_compression.rst @@ -18,7 +18,7 @@ NNI 中也内置了一些流程的模型压缩算法。 概述 Level Pruner AGP Pruner - L1Filter Pruner + L1Filter Pruner Slim Pruner Lottery Ticket Pruner FPGM Pruner diff --git a/docs/zh_CN/nas.rst b/docs/zh_CN/nas.rst index a7329dd60d..611c5aefe2 100644 --- a/docs/zh_CN/nas.rst +++ b/docs/zh_CN/nas.rst @@ -22,4 +22,6 @@ NAS 算法 NAS 接口 ENAS DARTS - P-DARTS + P-DARTS + SPOS + CDARTS diff --git a/docs/zh_CN/training_services.rst b/docs/zh_CN/training_services.rst index 4e2969e597..8e75af2ae7 100644 --- 
a/docs/zh_CN/training_services.rst +++ b/docs/zh_CN/training_services.rst @@ -6,5 +6,6 @@ NNI 支持的训练平台介绍 本机<./TrainingService/LocalMode> 远程<./TrainingService/RemoteMachineMode> OpenPAI<./TrainingService/PaiMode> + OpenPAI Yarn 模式<./TrainingService/PaiYarnMode> Kubeflow<./TrainingService/KubeflowMode> FrameworkController<./TrainingService/FrameworkControllerMode> diff --git a/examples/feature_engineering/auto-feature-engineering/README_zh_CN.md b/examples/feature_engineering/auto-feature-engineering/README_zh_CN.md index 55b50217cd..76cce132ff 100644 --- a/examples/feature_engineering/auto-feature-engineering/README_zh_CN.md +++ b/examples/feature_engineering/auto-feature-engineering/README_zh_CN.md @@ -1,8 +1,7 @@ -**NNI 中的自动特征工程** -=== +**NNI 中的自动特征工程** +=== -此[示例](https://github.com/SpongebBob/tabular_automl_NNI)在 NNI 中实现了自动特征工程。 +此[示例](https://github.com/SpongebBob/tabular_automl_NNI)在 NNI 中实现了自动特征工程。 -代码来自于贡献者。 谢谢可爱的贡献者! +代码来自于贡献者。 谢谢可爱的贡献者! -欢迎越来越多的人加入我们! \ No newline at end of file +欢迎越来越多的人加入我们! 
diff --git a/examples/trials/auto-gbdt/config_pai.yml b/examples/trials/auto-gbdt/config_pai.yml index 7393a080a2..e4cd040aec 100644 --- a/examples/trials/auto-gbdt/config_pai.yml +++ b/examples/trials/auto-gbdt/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/auto-gbdt/config_paiYarn.yml b/examples/trials/auto-gbdt/config_paiYarn.yml new file mode 100644 index 0000000000..427a6eacd8 --- /dev/null +++ b/examples/trials/auto-gbdt/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_auto-gbdt +trialConcurrency: 1 +maxExecDuration: 10h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: minimize +trial: + command: python3 main.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/cifar10_pytorch/config_pai.yml b/examples/trials/cifar10_pytorch/config_pai.yml index 87d82ff097..97aac1e040 100644 --- a/examples/trials/cifar10_pytorch/config_pai.yml +++ b/examples/trials/cifar10_pytorch/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 diff --git a/examples/trials/cifar10_pytorch/config_paiYarn.yml b/examples/trials/cifar10_pytorch/config_paiYarn.yml new file mode 100644 index 0000000000..3ac750f536 --- /dev/null +++ b/examples/trials/cifar10_pytorch/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_pytorch_cifar10 +trialConcurrency: 1 +maxExecDuration: 100h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 main.py + codeDir: . 
+ gpuNum: 1 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 diff --git a/examples/trials/efficientnet/README_zh_CN.md b/examples/trials/efficientnet/README_zh_CN.md index 2f4ac5e65f..083689141b 100644 --- a/examples/trials/efficientnet/README_zh_CN.md +++ b/examples/trials/efficientnet/README_zh_CN.md @@ -1,19 +1 @@ -# EfficientNet - -[EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) - -Provided here: a search space and tuner that use grid search to find the best tuple (alpha, beta, gamma) for EfficientNet-B1, following Section 3.3 of the [paper](https://arxiv.org/abs/1905.11946). - -## Instructions - -1. Set this directory as the current directory. -2. Run `git clone https://github.com/ultmaster/EfficientNet-PyTorch` to clone the modified [EfficientNet-PyTorch](https://github.com/lukemelas/EfficientNet-PyTorch). The modifications stay as close as possible to the original [TensorFlow version](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) (including EMA, label smoothing, and so on); code was also added to fetch parameters from the tuner and report intermediate and final results. Clone it into `EfficientNet-PyTorch`; files such as `main.py` and `train_imagenet.sh` will then be at the paths specified in the configuration files. -3. Run `nnictl create --config config_net.yml` to find the best EfficientNet-B1. Adjust the training service (OpenPAI, local, remote) and batch size to fit your environment. - -For training on ImageNet, read `EfficientNet-PyTorch/train_imagenet.sh`. Download ImageNet and extract it in the [PyTorch format](https://pytorch.org/docs/stable/torchvision/datasets.html#imagenet), then replace `/mnt/data/imagenet` with the path to your ImageNet copy. This file is also an example of how to mount ImageNet into an OpenPAI container. - -## Results - -The figure below shows the relationship between acc@1 and alpha, beta, gamma. - -![](assets/search_result.png) \ No newline at end of file +[Documentation](https://nni.readthedocs.io/en/latest/TrialExample/EfficientNet.html) \ No newline at end of file diff --git a/examples/trials/efficientnet/config_pai.yml b/examples/trials/efficientnet/config_pai.yml index 3ae75ef46c..d69634c846 100644 --- a/examples/trials/efficientnet/config_pai.yml +++ b/examples/trials/efficientnet/config_pai.yml @@ -21,8 +21,11 @@ trial: gpuNum: 1 virtualCluster: nni image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise nniManagerIp: paiConfig: userName: - passWord: + token: host: diff --git a/examples/trials/efficientnet/config_paiYarn.yml b/examples/trials/efficientnet/config_paiYarn.yml new file mode 100644 index 0000000000..5c39282211 --- /dev/null +++ b/examples/trials/efficientnet/config_paiYarn.yml @@ -0,0 +1,28 @@ +authorName: unknown +experimentName: example_efficient_net +trialConcurrency: 8 +maxExecDuration: 48h +maxTrialNum: 100 +trainingServicePlatform: paiYarn +searchSpacePath: search_net.json +useAnnotation: false +tuner: + codeDir: . 
+ classFileName: tuner.py + className: FixedProductTuner + classArgs: + product: 2 +trial: + codeDir: EfficientNet-PyTorch + command: sh train_imagenet.sh + cpuNum: 4 + memoryMB: 25000 + shmMB: 25000 + gpuNum: 1 + virtualCluster: nni + image: msranni/nni:latest +nniManagerIp: +paiYarnConfig: + userName: + passWord: + host: diff --git a/examples/trials/ga_squad/config_pai.yml b/examples/trials/ga_squad/config_pai.yml index a2cfb8f381..1921274d32 100644 --- a/examples/trials/ga_squad/config_pai.yml +++ b/examples/trials/ga_squad/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 32869 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 diff --git a/examples/trials/ga_squad/config_paiYarn.yml b/examples/trials/ga_squad/config_paiYarn.yml new file mode 100644 index 0000000000..4bded4540e --- /dev/null +++ b/examples/trials/ga_squad/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_ga_squad +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +#choice: true, false +useAnnotation: false +#Your nni_manager ip +nniManagerIp: 10.10.10.10 +tuner: + codeDir: ../../tuners/ga_customer_tuner + classFileName: customer_tuner.py + className: CustomerTuner + classArgs: + optimize_mode: maximize +trial: + command: chmod +x ./download.sh && ./download.sh && python3 trial.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 32869 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 diff --git a/examples/trials/mnist-advisor/config_pai.yml b/examples/trials/mnist-advisor/config_pai.yml index b26b758f79..c04b15f614 100644 --- a/examples/trials/mnist-advisor/config_pai.yml +++ b/examples/trials/mnist-advisor/config_pai.yml @@ -27,10 +27,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 diff --git a/examples/trials/mnist-advisor/config_paiYarn.yml b/examples/trials/mnist-advisor/config_paiYarn.yml new file mode 100644 index 0000000000..192558a63d --- /dev/null +++ b/examples/trials/mnist-advisor/config_paiYarn.yml @@ -0,0 +1,36 @@ +authorName: default +experimentName: example_mnist_hyperband +maxExecDuration: 1h +maxTrialNum: 10000 +trialConcurrency: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +advisor: + #choice: Hyperband, BOHB + #(BOHB should be installed through nnictl) + builtinAdvisorName: Hyperband + classArgs: + #R: the maximum trial budget + R: 100 + #eta: proportion of discarded trials + eta: 3 + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 mnist.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 diff --git a/examples/trials/mnist-annotation/config_pai.yml b/examples/trials/mnist-annotation/config_pai.yml index f8a825defd..2f8b4d00a8 100644 --- a/examples/trials/mnist-annotation/config_pai.yml +++ b/examples/trials/mnist-annotation/config_pai.yml @@ -22,10 +22,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-annotation/config_paiYarn.yml b/examples/trials/mnist-annotation/config_paiYarn.yml new file mode 100644 index 0000000000..1a3299d606 --- /dev/null +++ b/examples/trials/mnist-annotation/config_paiYarn.yml @@ -0,0 +1,31 @@ +authorName: default +experimentName: example_mnist +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +#choice: true, false +useAnnotation: true +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 mnist.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-batch-tune-keras/config_pai.yml b/examples/trials/mnist-batch-tune-keras/config_pai.yml index 69c6dd5f61..79bea33f94 100644 --- a/examples/trials/mnist-batch-tune-keras/config_pai.yml +++ b/examples/trials/mnist-batch-tune-keras/config_pai.yml @@ -20,10 +20,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 diff --git a/examples/trials/mnist-batch-tune-keras/config_paiYarn.yml b/examples/trials/mnist-batch-tune-keras/config_paiYarn.yml new file mode 100644 index 0000000000..a81932285f --- /dev/null +++ b/examples/trials/mnist-batch-tune-keras/config_paiYarn.yml @@ -0,0 +1,29 @@ +authorName: default +experimentName: example_mnist-keras +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: BatchTuner +trial: + command: python3 mnist-keras.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 diff --git a/examples/trials/mnist-keras/config_pai.yml b/examples/trials/mnist-keras/config_pai.yml index aa08d0ee1c..392c53025a 100644 --- a/examples/trials/mnist-keras/config_pai.yml +++ b/examples/trials/mnist-keras/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-keras/config_paiYarn.yml b/examples/trials/mnist-keras/config_paiYarn.yml new file mode 100644 index 0000000000..4e5279a689 --- /dev/null +++ b/examples/trials/mnist-keras/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_mnist-keras +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 mnist-keras.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-pytorch/config_pai.yml b/examples/trials/mnist-pytorch/config_pai.yml index ac64bb4ce6..233ff8bdb6 100644 --- a/examples/trials/mnist-pytorch/config_pai.yml +++ b/examples/trials/mnist-pytorch/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-pytorch/config_paiYarn.yml b/examples/trials/mnist-pytorch/config_paiYarn.yml new file mode 100644 index 0000000000..d1aae75122 --- /dev/null +++ b/examples/trials/mnist-pytorch/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_mnist_pytorch +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 mnist.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-tfv1/config_pai.yml b/examples/trials/mnist-tfv1/config_pai.yml index c0bb710294..67df714a4b 100644 --- a/examples/trials/mnist-tfv1/config_pai.yml +++ b/examples/trials/mnist-tfv1/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/mnist-tfv1/config_paiYarn.yml b/examples/trials/mnist-tfv1/config_paiYarn.yml new file mode 100644 index 0000000000..886ee21c09 --- /dev/null +++ b/examples/trials/mnist-tfv1/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_mnist +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 mnist.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/nas_cifar10/config_paiYarn_ppo.yml b/examples/trials/nas_cifar10/config_paiYarn_ppo.yml new file mode 100644 index 0000000000..eb1fb8abc3 --- /dev/null +++ b/examples/trials/nas_cifar10/config_paiYarn_ppo.yml @@ -0,0 +1,31 @@ +authorName: Unknown +experimentName: enas_macro +trialConcurrency: 20 +maxExecDuration: 2400h +maxTrialNum: 20000 +#choice: local, remote +trainingServicePlatform: paiYarn +#choice: true, false +useAnnotation: true +multiPhase: false +versionCheck: false +nniManagerIp: 0.0.0.0 +tuner: + builtinTunerName: PPOTuner + classArgs: + optimize_mode: maximize + trials_per_update: 60 + epochs_per_update: 20 + minibatch_size: 6 +trial: + command: sh ./macro_cifar10_pai.sh + codeDir: ./ + gpuNum: 1 + cpuNum: 1 + memoryMB: 8196 + image: msranni/nni:latest + virtualCluster: nni +paiYarnConfig: + userName: your_account + passWord: your_passwd + host: 0.0.0.0 diff --git a/examples/trials/nas_cifar10/config_pai_ppo.yml b/examples/trials/nas_cifar10/config_pai_ppo.yml index 38156376bd..f5082d87d0 100644 --- a/examples/trials/nas_cifar10/config_pai_ppo.yml +++ b/examples/trials/nas_cifar10/config_pai_ppo.yml @@ -25,7 +25,10 @@ trial: memoryMB: 8196 image: msranni/nni:latest virtualCluster: nni + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: userName: your_account - passWord: your_pwd + token: your_token host: 0.0.0.0 diff --git a/examples/trials/network_morphism/FashionMNIST/config_pai.yml b/examples/trials/network_morphism/FashionMNIST/config_pai.yml index 3562d8dc82..db3d8be6d8 100644 --- a/examples/trials/network_morphism/FashionMNIST/config_pai.yml 
+++ b/examples/trials/network_morphism/FashionMNIST/config_pai.yml @@ -30,10 +30,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/network_morphism/FashionMNIST/config_paiYarn.yml b/examples/trials/network_morphism/FashionMNIST/config_paiYarn.yml new file mode 100644 index 0000000000..e15ec2ebec --- /dev/null +++ b/examples/trials/network_morphism/FashionMNIST/config_paiYarn.yml @@ -0,0 +1,39 @@ +authorName: default +experimentName: example_FashionMNIST-network-morphism +trialConcurrency: 1 +maxExecDuration: 24h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, NetworkMorphism + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: NetworkMorphism + classArgs: + #choice: maximize, minimize + optimize_mode: maximize + # for now, this tuner only supports cv domain + task: cv + #input image width + input_width: 28 + #input image channel + input_channel: 1 + #number of classes + n_output_node: 10 +trial: + command: python3 FashionMNIST_keras.py + codeDir: . 
+ gpuNum: 1 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/network_morphism/cifar10/config_pai.yml b/examples/trials/network_morphism/cifar10/config_pai.yml index e14caab934..e2e39c7a4b 100644 --- a/examples/trials/network_morphism/cifar10/config_pai.yml +++ b/examples/trials/network_morphism/cifar10/config_pai.yml @@ -30,10 +30,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/network_morphism/cifar10/config_paiYarn.yml b/examples/trials/network_morphism/cifar10/config_paiYarn.yml new file mode 100644 index 0000000000..3367aa4e36 --- /dev/null +++ b/examples/trials/network_morphism/cifar10/config_paiYarn.yml @@ -0,0 +1,39 @@ +authorName: default +experimentName: example_cifar10-network-morphism +trialConcurrency: 1 +maxExecDuration: 24h +maxTrialNum: 10 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, NetworkMorphism + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: NetworkMorphism + classArgs: + #choice: maximize, minimize + optimize_mode: maximize + # for now, this tuner only supports cv domain + task: cv + #input image width + input_width: 32 + #input image channel + input_channel: 3 + #number of classes + n_output_node: 10 +trial: + 
command: python3 cifar10_keras.py + codeDir: . + gpuNum: 1 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/sklearn/classification/config_pai.yml b/examples/trials/sklearn/classification/config_pai.yml index d3ffdc8d74..6600894ccb 100644 --- a/examples/trials/sklearn/classification/config_pai.yml +++ b/examples/trials/sklearn/classification/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/sklearn/classification/config_paiYarn.yml b/examples/trials/sklearn/classification/config_paiYarn.yml new file mode 100644 index 0000000000..9bec9a4c50 --- /dev/null +++ b/examples/trials/sklearn/classification/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_sklearn +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 100 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner,MetisTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 main.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/sklearn/regression/config_pai.yml b/examples/trials/sklearn/regression/config_pai.yml index b6d84f2f6d..d4f7491b6e 100644 --- a/examples/trials/sklearn/regression/config_pai.yml +++ b/examples/trials/sklearn/regression/config_pai.yml @@ -23,10 +23,13 @@ trial: memoryMB: 8196 #The docker image to run nni job on pai image: msranni/nni:latest + nniManagerNFSMountPath: /home/user/mnt + containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise paiConfig: #The username to login pai userName: username - #The password to login pai - passWord: password + #The token to login pai + token: token #The host of restful server of pai host: 10.10.10.10 \ No newline at end of file diff --git a/examples/trials/sklearn/regression/config_paiYarn.yml b/examples/trials/sklearn/regression/config_paiYarn.yml new file mode 100644 index 0000000000..0e0e73a0ab --- /dev/null +++ b/examples/trials/sklearn/regression/config_paiYarn.yml @@ -0,0 +1,32 @@ +authorName: default +experimentName: example_sklearn +trialConcurrency: 1 +maxExecDuration: 1h +maxTrialNum: 100 +#choice: local, remote, pai +trainingServicePlatform: paiYarn +searchSpacePath: search_space.json +#choice: true, false +useAnnotation: false +tuner: + #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner + #SMAC (SMAC should be installed through nnictl) + builtinTunerName: TPE + classArgs: + #choice: maximize, minimize + optimize_mode: maximize +trial: + command: python3 main.py + codeDir: . 
+ gpuNum: 0 + cpuNum: 1 + memoryMB: 8196 + #The docker image to run nni job on pai + image: msranni/nni:latest +paiYarnConfig: + #The username to login pai + userName: username + #The password to login pai + passWord: password + #The host of restful server of pai + host: 10.10.10.10 \ No newline at end of file diff --git a/src/nni_manager/common/log.ts b/src/nni_manager/common/log.ts index 31a38d6e07..9bf2b92c88 100644 --- a/src/nni_manager/common/log.ts +++ b/src/nni_manager/common/log.ts @@ -4,13 +4,11 @@ 'use strict'; import * as fs from 'fs'; -import * as path from 'path'; import { Writable } from 'stream'; import { WritableStreamBuffer } from 'stream-buffers'; import { format } from 'util'; import * as component from '../common/component'; import { getExperimentStartupInfo, isReadonly } from './experimentStartupInfo'; -import { getLogDir } from './utils'; const FATAL: number = 1; const ERROR: number = 2; @@ -55,23 +53,21 @@ class BufferSerialEmitter { @component.Singleton class Logger { - private DEFAULT_LOGFILE: string = path.join(getLogDir(), 'nnimanager.log'); private level: number = INFO; - private bufferSerialEmitter: BufferSerialEmitter; - private writable: Writable; + private bufferSerialEmitter?: BufferSerialEmitter; + private writable?: Writable; private readonly: boolean = false; constructor(fileName?: string) { - let logFile: string | undefined = fileName; - if (logFile === undefined) { - logFile = this.DEFAULT_LOGFILE; + const logFile: string | undefined = fileName; + if (logFile) { + this.writable = fs.createWriteStream(logFile, { + flags: 'a+', + encoding: 'utf8', + autoClose: true + }); + this.bufferSerialEmitter = new BufferSerialEmitter(this.writable); } - this.writable = fs.createWriteStream(logFile, { - flags: 'a+', - encoding: 'utf8', - autoClose: true - }); - this.bufferSerialEmitter = new BufferSerialEmitter(this.writable); const logLevelName: string = getExperimentStartupInfo() .getLogLevel(); @@ -84,7 +80,9 @@ class Logger { } public 
close(): void { - this.writable.destroy(); + if (this.writable) { + this.writable.destroy(); + } } public trace(...param: any[]): void { @@ -128,12 +126,15 @@ class Logger { */ private log(level: string, param: any[]): void { if (!this.readonly) { - const buffer: WritableStreamBuffer = new WritableStreamBuffer(); - buffer.write(`[${(new Date()).toLocaleString()}] ${level} `); - buffer.write(format(param)); - buffer.write('\n'); - buffer.end(); - this.bufferSerialEmitter.feed(buffer.getContents()); + const logContent = `[${(new Date()).toLocaleString()}] ${level} ${format(param)}\n`; + if (this.writable && this.bufferSerialEmitter) { + const buffer: WritableStreamBuffer = new WritableStreamBuffer(); + buffer.write(logContent); + buffer.end(); + this.bufferSerialEmitter.feed(buffer.getContents()); + } else { + console.log(logContent); + } } } } diff --git a/src/nni_manager/main.ts b/src/nni_manager/main.ts index f707304382..51f964756e 100644 --- a/src/nni_manager/main.ts +++ b/src/nni_manager/main.ts @@ -6,6 +6,7 @@ import { Container, Scope } from 'typescript-ioc'; import * as fs from 'fs'; +import * as path from 'path'; import * as component from './common/component'; import { Database, DataStore } from './common/datastore'; import { setExperimentStartupInfo } from './common/experimentStartupInfo'; @@ -34,7 +35,7 @@ function initStartupInfo( setExperimentStartupInfo(createNew, expId, basePort, logDirectory, experimentLogLevel, readonly); } -async function initContainer(platformMode: string, logFileName?: string): Promise { +async function initContainer(foreground: boolean, platformMode: string, logFileName?: string): Promise { if (platformMode === 'local') { Container.bind(TrainingService) .to(LocalTrainingService) @@ -71,6 +72,12 @@ async function initContainer(platformMode: string, logFileName?: string): Promis Container.bind(DataStore) .to(NNIDataStore) .scope(Scope.Singleton); + const DEFAULT_LOGFILE: string = path.join(getLogDir(), 'nnimanager.log'); + if 
(foreground) { + logFileName = undefined; + } else if (logFileName === undefined) { + logFileName = DEFAULT_LOGFILE; + } Container.bind(Logger).provider({ get: (): Logger => new Logger(logFileName) }); @@ -81,7 +88,7 @@ async function initContainer(platformMode: string, logFileName?: string): Promis function usage(): void { console.info('usage: node main.js --port --mode \ - --start_mode --experiment_id '); + --start_mode --experiment_id --foreground '); } const strPort: string = parseArg(['--port', '-p']); @@ -90,6 +97,14 @@ if (!strPort || strPort.length === 0) { process.exit(1); } +const foregroundArg: string = parseArg(['--foreground', '-f']); +if (!['true', 'false'].includes(foregroundArg.toLowerCase())) { + console.log(`FATAL: foreground property should only be true or false`); + usage(); + process.exit(1); +} +const foreground: boolean = foregroundArg.toLowerCase() === 'true'; + const port: number = parseInt(strPort, 10); const mode: string = parseArg(['--mode', '-m']); @@ -138,7 +153,7 @@ initStartupInfo(startMode, experimentId, port, logDir, logLevel, readonly); mkDirP(getLogDir()) .then(async () => { try { - await initContainer(foreground, mode); const restServer: NNIRestServer = component.get(NNIRestServer); await restServer.start(); const log: Logger = getLogger(); @@ -162,6 +177,15 @@ function getStopSignal(): any { } } +function getCtrlCSignal(): any { + return 'SIGINT'; +} + +process.on(getCtrlCSignal(), async () => { + const log: Logger = getLogger(); + log.info(`Got SIGINT signal!`); +}); + process.on(getStopSignal(), async () => { const log: Logger = getLogger(); let hasError: boolean = false; diff --git a/src/nni_manager/package.json b/src/nni_manager/package.json index 93e77cdf48..9b71067467 100644 --- a/src/nni_manager/package.json +++ b/src/nni_manager/package.json @@ -13,6 +13,7 @@ "azure-storage": "^2.10.2", "chai-as-promised": "^7.1.1", "child-process-promise": "^2.2.1", + "deepmerge": "^4.2.2", 
"express": "^4.16.3", "express-joi-validator": "^2.0.0", "js-base64": "^2.4.9", diff --git a/src/nni_manager/rest_server/restValidationSchemas.ts b/src/nni_manager/rest_server/restValidationSchemas.ts index a9ad8cfd9a..c7fa694fb9 100644 --- a/src/nni_manager/rest_server/restValidationSchemas.ts +++ b/src/nni_manager/rest_server/restValidationSchemas.ts @@ -38,6 +38,7 @@ export namespace ValidationSchemas { authFile: joi.string(), nniManagerNFSMountPath: joi.string().min(1), containerNFSMountPath: joi.string().min(1), + paiConfigPath: joi.string(), paiStoragePlugin: joi.string().min(1), nasMode: joi.string().valid('classic_mode', 'enas_mode', 'oneshot_mode', 'darts_mode'), portList: joi.array().items(joi.object({ diff --git a/src/nni_manager/training_service/pai/paiK8S/paiK8SConfig.ts b/src/nni_manager/training_service/pai/paiK8S/paiK8SConfig.ts index 70f175683e..26ad2901bd 100644 --- a/src/nni_manager/training_service/pai/paiK8S/paiK8SConfig.ts +++ b/src/nni_manager/training_service/pai/paiK8S/paiK8SConfig.ts @@ -31,10 +31,11 @@ export class NNIPAIK8STrialConfig extends TrialConfig { public readonly nniManagerNFSMountPath: string; public readonly containerNFSMountPath: string; public readonly paiStoragePlugin: string; + public readonly paiConfigPath?: string; constructor(command: string, codeDir: string, gpuNum: number, cpuNum: number, memoryMB: number, image: string, nniManagerNFSMountPath: string, containerNFSMountPath: string, - paiStoragePlugin: string, virtualCluster?: string) { + paiStoragePlugin: string, virtualCluster?: string, paiConfigPath?: string) { super(command, codeDir, gpuNum); this.cpuNum = cpuNum; this.memoryMB = memoryMB; @@ -43,5 +44,6 @@ export class NNIPAIK8STrialConfig extends TrialConfig { this.nniManagerNFSMountPath = nniManagerNFSMountPath; this.containerNFSMountPath = containerNFSMountPath; this.paiStoragePlugin = paiStoragePlugin; + this.paiConfigPath = paiConfigPath; } } diff --git 
a/src/nni_manager/training_service/pai/paiK8S/paiK8STrainingService.ts b/src/nni_manager/training_service/pai/paiK8S/paiK8STrainingService.ts index fc64d4dbdc..263009719c 100644 --- a/src/nni_manager/training_service/pai/paiK8S/paiK8STrainingService.ts +++ b/src/nni_manager/training_service/pai/paiK8S/paiK8STrainingService.ts @@ -44,6 +44,7 @@ import { PAIClusterConfig, PAITrialJobDetail } from '../paiConfig'; import { PAIJobRestServer } from '../paiJobRestServer'; const yaml = require('js-yaml'); +const deepmerge = require('deepmerge'); /** * Training Service implementation for OpenPAI (Open Platform for AI) @@ -59,6 +60,10 @@ class PAIK8STrainingService extends PAITrainingService { public async setClusterMetadata(key: string, value: string): Promise { switch (key) { + case TrialConfigMetadataKey.NNI_MANAGER_IP: + this.nniManagerIpConfig = JSON.parse(value); + break; + case TrialConfigMetadataKey.PAI_CLUSTER_CONFIG: this.paiJobRestServer = new PAIJobRestServer(component.get(PAIK8STrainingService)); this.paiClusterConfig = JSON.parse(value); @@ -185,7 +190,19 @@ class PAIK8STrainingService extends PAITrainingService { } } - return yaml.safeDump(paiJobConfig); + if (this.paiTrialConfig.paiConfigPath) { + try { + const additionalPAIConfig = yaml.safeLoad(fs.readFileSync(this.paiTrialConfig.paiConfigPath, 'utf8')); + //deepmerge(x, y), if an element at the same key is present for both x and y, the value from y will appear in the result. 
+                //refer: https://github.com/TehShrike/deepmerge
+                const overwriteMerge = (destinationArray: any, sourceArray: any, options: any) => sourceArray;
+                return yaml.safeDump(deepmerge(additionalPAIConfig, paiJobConfig, { arrayMerge: overwriteMerge }));
+            } catch (error) {
+                this.log.error(`Error occurred while loading and merging ${this.paiTrialConfig.paiConfigPath}: ${error}`);
+                throw error;
+            }
+        } else {
+            return yaml.safeDump(paiJobConfig);
+        }
     }

     protected async submitTrialJobToPAI(trialJobId: string): Promise<void> {
@@ -254,7 +271,7 @@ class PAIK8STrainingService extends PAITrainingService {
         this.log.info(`nniPAItrial command is ${nniPaiTrialCommand.trim()}`);

         const paiJobConfig = this.generateJobConfigInYamlFormat(trialJobId, nniPaiTrialCommand);
-
+        this.log.debug(paiJobConfig);
         // Step 3. Submit PAI job via Rest call
         // Refer https://github.com/Microsoft/pai/blob/master/docs/rest-server/API.md for more detail about PAI Rest API
         const submitJobRequest: request.Options = {
diff --git a/src/nni_manager/yarn.lock b/src/nni_manager/yarn.lock
index 379af7c4b9..ae9c5f6d99 100644
--- a/src/nni_manager/yarn.lock
+++ b/src/nni_manager/yarn.lock
@@ -1112,6 +1112,11 @@ deepmerge@^2.1.1:
   version "2.2.1"
   resolved "https://registry.yarnpkg.com/deepmerge/-/deepmerge-2.2.1.tgz#5d3ff22a01c00f645405a2fbc17d0778a1801170"

+deepmerge@^4.2.2:
+  version "4.2.2"
+  resolved "https://registry.yarnpkg.com/deepmerge/-/deepmerge-4.2.2.tgz#44d2ea3679b8f4d4ffba33f03d865fc1e7bf4955"
+  integrity sha512-FJ3UgI4gIl+PHZm53knsuSFpE+nESMr7M4v9QcgB7S63Kj/6WqMiFQJpBBYz1Pt+66bZpP3Q7Lye0Oo9MPKEdg==
+
 default-require-extensions@^2.0.0:
   version "2.0.0"
   resolved "https://registry.yarnpkg.com/default-require-extensions/-/default-require-extensions-2.0.0.tgz#f5f8fbb18a7d6d50b21f641f649ebb522cfe24f7"
diff --git a/src/sdk/pynni/nni/compression/torch/pruners.py b/src/sdk/pynni/nni/compression/torch/pruners.py
index 82f37a488c..fb15d33315 100644
--- a/src/sdk/pynni/nni/compression/torch/pruners.py
+++
b/src/sdk/pynni/nni/compression/torch/pruners.py @@ -113,7 +113,7 @@ def calc_mask(self, layer, config): if k == 0 or target_sparsity >= 1 or target_sparsity <= 0: return mask # if we want to generate new mask, we should update weigth first - w_abs = weight.abs() * mask + w_abs = weight.abs() * mask['weight'] threshold = torch.topk(w_abs.view(-1), k, largest=False)[0].max() new_mask = {'weight': torch.gt(w_abs, threshold).type_as(weight)} self.mask_dict.update({op_name: new_mask}) diff --git a/src/sdk/pynni/nni/medianstop_assessor/test.py b/src/sdk/pynni/nni/medianstop_assessor/test.py index bad19911c2..8c7d6d927b 100644 --- a/src/sdk/pynni/nni/medianstop_assessor/test.py +++ b/src/sdk/pynni/nni/medianstop_assessor/test.py @@ -31,11 +31,11 @@ def test(): # [1,1,1,1,1,1,1,1,1,1], # [1,1,1,1,1,1,1,1,1,1]] - assessor = MedianstopAssessor(FLAGS.start_step, FLAGS.optimize_mode) - for i in range(4): + assessor = MedianstopAssessor(FLAGS.optimize_mode, FLAGS.start_step) + for i in range(len(lcs)): #lc = [] to_complete = True - for k in range(10): + for k in range(len(lcs[0])): #d = random.randint(i*100+0, i*100+100) #lc.append(d) ret = assessor.assess_trial(i, lcs[i][:k+1]) diff --git a/src/sdk/pynni/nni/nas/pytorch/classic_nas/mutator.py b/src/sdk/pynni/nni/nas/pytorch/classic_nas/mutator.py index f1da69984d..a19b2c2a5a 100644 --- a/src/sdk/pynni/nni/nas/pytorch/classic_nas/mutator.py +++ b/src/sdk/pynni/nni/nas/pytorch/classic_nas/mutator.py @@ -68,6 +68,13 @@ def __init__(self, model): else: # get chosen arch from tuner self._chosen_arch = nni.get_next_parameter() + if self._chosen_arch is None: + if trial_env_vars.NNI_PLATFORM == "unittest": + # happens if NNI_PLATFORM is intentionally set, e.g., in UT + logger.warning("`NNI_PLATFORM` is set but `param` is None. Falling back to standalone mode.") + self._chosen_arch = self._standalone_generate_chosen() + else: + raise RuntimeError("Chosen architecture is None. 
This may be a platform error.") self.reset() def _sample_layer_choice(self, mutable, idx, value, search_space_item): @@ -169,6 +176,8 @@ def _standalone_generate_chosen(self): elif val["_type"] == INPUT_CHOICE: choices = val["_value"]["candidates"] n_chosen = val["_value"]["n_chosen"] + if n_chosen is None: + n_chosen = len(choices) chosen_arch[key] = {"_value": choices[:n_chosen], "_idx": list(range(n_chosen))} else: raise ValueError("Unknown key '%s' and value '%s'." % (key, val)) diff --git a/src/sdk/pynni/nni/nas/pytorch/darts/mutator.py b/src/sdk/pynni/nni/nas/pytorch/darts/mutator.py index b3a21f3a31..2aba20dd45 100644 --- a/src/sdk/pynni/nni/nas/pytorch/darts/mutator.py +++ b/src/sdk/pynni/nni/nas/pytorch/darts/mutator.py @@ -63,18 +63,23 @@ def sample_final(self): edges_max[mutable.key] = max_val result[mutable.key] = F.one_hot(index, num_classes=mutable.length).view(-1).bool() for mutable in self.mutables: - if isinstance(mutable, InputChoice) and mutable.n_chosen is not None: - weights = [] - for src_key in mutable.choose_from: - if src_key not in edges_max: - _logger.warning("InputChoice.NO_KEY in '%s' is weighted 0 when selecting inputs.", mutable.key) - weights.append(edges_max.get(src_key, 0.)) - weights = torch.tensor(weights) # pylint: disable=not-callable - _, topk_edge_indices = torch.topk(weights, mutable.n_chosen) - selected_multihot = [] - for i, src_key in enumerate(mutable.choose_from): - if i not in topk_edge_indices and src_key in result: - result[src_key] = torch.zeros_like(result[src_key]) # clear this choice to optimize calc graph - selected_multihot.append(i in topk_edge_indices) - result[mutable.key] = torch.tensor(selected_multihot, dtype=torch.bool, device=self.device()) # pylint: disable=not-callable + if isinstance(mutable, InputChoice): + if mutable.n_chosen is not None: + weights = [] + for src_key in mutable.choose_from: + if src_key not in edges_max: + _logger.warning("InputChoice.NO_KEY in '%s' is weighted 0 when selecting 
inputs.", mutable.key) + weights.append(edges_max.get(src_key, 0.)) + weights = torch.tensor(weights) # pylint: disable=not-callable + _, topk_edge_indices = torch.topk(weights, mutable.n_chosen) + selected_multihot = [] + for i, src_key in enumerate(mutable.choose_from): + if i not in topk_edge_indices and src_key in result: + # If an edge is never selected, there is no need to calculate any op on this edge. + # This is to eliminate redundant calculation. + result[src_key] = torch.zeros_like(result[src_key]) + selected_multihot.append(i in topk_edge_indices) + result[mutable.key] = torch.tensor(selected_multihot, dtype=torch.bool, device=self.device()) # pylint: disable=not-callable + else: + result[mutable.key] = torch.ones(mutable.n_candidates, dtype=torch.bool, device=self.device()) # pylint: disable=not-callable return result diff --git a/src/sdk/pynni/nni/nas/pytorch/fixed.py b/src/sdk/pynni/nni/nas/pytorch/fixed.py index 78a7980a81..0be4e0ea79 100644 --- a/src/sdk/pynni/nni/nas/pytorch/fixed.py +++ b/src/sdk/pynni/nni/nas/pytorch/fixed.py @@ -58,16 +58,16 @@ def _encode_tensor(data): return data -def apply_fixed_architecture(model, fixed_arc_path): +def apply_fixed_architecture(model, fixed_arc): """ - Load architecture from `fixed_arc_path` and apply to model. + Load architecture from `fixed_arc` and apply to model. Parameters ---------- model : torch.nn.Module Model with mutables. - fixed_arc_path : str - Path to the JSON that stores the architecture. + fixed_arc : str or dict + Path to the JSON that stores the architecture, or dict that stores the exported architecture. Returns ------- @@ -75,8 +75,8 @@ def apply_fixed_architecture(model, fixed_arc_path): Mutator that is responsible for fixes the graph. 
""" - if isinstance(fixed_arc_path, str): - with open(fixed_arc_path, "r") as f: + if isinstance(fixed_arc, str): + with open(fixed_arc) as f: fixed_arc = json.load(f) fixed_arc = _encode_tensor(fixed_arc) architecture = FixedArchitecture(model, fixed_arc) diff --git a/src/sdk/pynni/nni/nas/pytorch/utils.py b/src/sdk/pynni/nni/nas/pytorch/utils.py index 06961f8e80..3648425f20 100644 --- a/src/sdk/pynni/nni/nas/pytorch/utils.py +++ b/src/sdk/pynni/nni/nas/pytorch/utils.py @@ -20,6 +20,14 @@ def global_mutable_counting(): return _counter +def _reset_global_mutable_counting(): + """ + Reset the global mutable counting to count from 1. Useful when defining multiple models with default keys. + """ + global _counter + _counter = 0 + + def to_device(obj, device): """ Move a tensor, tuple, list, or dict onto device. diff --git a/src/sdk/pynni/tests/models/pytorch_models/__init__.py b/src/sdk/pynni/tests/models/pytorch_models/__init__.py new file mode 100644 index 0000000000..46d4482c86 --- /dev/null +++ b/src/sdk/pynni/tests/models/pytorch_models/__init__.py @@ -0,0 +1,6 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. + +from .mutable_scope import SpaceWithMutableScope +from .naive import NaiveSearchSpace +from .nested import NestedSpace diff --git a/src/sdk/pynni/tests/models/pytorch_models/mutable_scope.py b/src/sdk/pynni/tests/models/pytorch_models/mutable_scope.py new file mode 100644 index 0000000000..505a14880f --- /dev/null +++ b/src/sdk/pynni/tests/models/pytorch_models/mutable_scope.py @@ -0,0 +1,95 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+ +import torch +import torch.nn as nn +import torch.nn.functional as F + +from nni.nas.pytorch.mutables import LayerChoice, InputChoice, MutableScope + + +class Cell(MutableScope): + def __init__(self, cell_name, prev_labels, channels): + super().__init__(cell_name) + self.input_choice = InputChoice(choose_from=prev_labels, n_chosen=1, return_mask=True, + key=cell_name + "_input") + self.op_choice = LayerChoice([ + nn.Conv2d(channels, channels, 3, padding=1), + nn.Conv2d(channels, channels, 5, padding=2), + nn.MaxPool2d(3, stride=1, padding=1), + nn.AvgPool2d(3, stride=1, padding=1), + nn.Identity() + ], key=cell_name + "_op") + + def forward(self, prev_layers): + chosen_input, chosen_mask = self.input_choice(prev_layers) + cell_out = self.op_choice(chosen_input) + return cell_out, chosen_mask + + +class Node(MutableScope): + def __init__(self, node_name, prev_node_names, channels): + super().__init__(node_name) + self.cell_x = Cell(node_name + "_x", prev_node_names, channels) + self.cell_y = Cell(node_name + "_y", prev_node_names, channels) + + def forward(self, prev_layers): + out_x, mask_x = self.cell_x(prev_layers) + out_y, mask_y = self.cell_y(prev_layers) + return out_x + out_y, mask_x | mask_y + + +class Layer(nn.Module): + def __init__(self, num_nodes, channels): + super().__init__() + self.num_nodes = num_nodes + self.nodes = nn.ModuleList() + node_labels = [InputChoice.NO_KEY, InputChoice.NO_KEY] + for i in range(num_nodes): + node_labels.append("node_{}".format(i)) + self.nodes.append(Node(node_labels[-1], node_labels[:-1], channels)) + self.final_conv_w = nn.Parameter(torch.zeros(channels, self.num_nodes + 2, channels, 1, 1), + requires_grad=True) + self.bn = nn.BatchNorm2d(channels, affine=False) + + def forward(self, pprev, prev): + prev_nodes_out = [pprev, prev] + nodes_used_mask = torch.zeros(self.num_nodes + 2, dtype=torch.bool, device=prev.device) + for i in range(self.num_nodes): + node_out, mask = self.nodes[i](prev_nodes_out) + 
nodes_used_mask[:mask.size(0)] |= mask.to(prev.device) + # NOTE: which device should we put mask on? + prev_nodes_out.append(node_out) + + unused_nodes = torch.cat([out for used, out in zip(nodes_used_mask, prev_nodes_out) if not used], 1) + unused_nodes = F.relu(unused_nodes) + conv_weight = self.final_conv_w[:, ~nodes_used_mask, :, :, :] + conv_weight = conv_weight.view(conv_weight.size(0), -1, 1, 1) + out = F.conv2d(unused_nodes, conv_weight) + return prev, self.bn(out) + + +class SpaceWithMutableScope(nn.Module): + def __init__(self, test_case, num_layers=4, num_nodes=5, channels=16, in_channels=3, num_classes=10): + super().__init__() + self.test_case = test_case + self.num_layers = num_layers + + self.stem = nn.Sequential( + nn.Conv2d(in_channels, channels, 3, 1, 1, bias=False), + nn.BatchNorm2d(channels) + ) + + self.layers = nn.ModuleList() + for _ in range(self.num_layers + 2): + self.layers.append(Layer(num_nodes, channels)) + self.gap = nn.AdaptiveAvgPool2d(1) + self.dense = nn.Linear(channels, num_classes) + + def forward(self, x): + prev = cur = self.stem(x) + for layer in self.layers: + prev, cur = layer(prev, cur) + + cur = self.gap(F.relu(cur)).view(x.size(0), -1) + return self.dense(cur) diff --git a/src/sdk/pynni/tests/models/pytorch_models/naive.py b/src/sdk/pynni/tests/models/pytorch_models/naive.py new file mode 100644 index 0000000000..0555ec17e4 --- /dev/null +++ b/src/sdk/pynni/tests/models/pytorch_models/naive.py @@ -0,0 +1,45 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+ +import torch +import torch.nn as nn +import torch.nn.functional as F + +from nni.nas.pytorch.mutables import LayerChoice, InputChoice + + +class NaiveSearchSpace(nn.Module): + def __init__(self, test_case): + super().__init__() + self.test_case = test_case + self.conv1 = LayerChoice([nn.Conv2d(3, 6, 3, padding=1), nn.Conv2d(3, 6, 5, padding=2)]) + self.pool = nn.MaxPool2d(2, 2) + self.conv2 = LayerChoice([nn.Conv2d(6, 16, 3, padding=1), nn.Conv2d(6, 16, 5, padding=2)], + return_mask=True) + self.conv3 = nn.Conv2d(16, 16, 1) + + self.skipconnect = InputChoice(n_candidates=1) + self.skipconnect2 = InputChoice(n_candidates=2, return_mask=True) + self.bn = nn.BatchNorm2d(16) + + self.gap = nn.AdaptiveAvgPool2d(1) + self.fc = nn.Linear(16, 10) + + def forward(self, x): + bs = x.size(0) + + x = self.pool(F.relu(self.conv1(x))) + x0, mask = self.conv2(x) + self.test_case.assertEqual(mask.size(), torch.Size([2])) + x1 = F.relu(self.conv3(x0)) + + _, mask = self.skipconnect2([x0, x1]) + x0 = self.skipconnect([x0]) + if x0 is not None: + x1 += x0 + x = self.pool(self.bn(x1)) + self.test_case.assertEqual(mask.size(), torch.Size([2])) + + x = self.gap(x).view(bs, -1) + x = self.fc(x) + return x diff --git a/src/sdk/pynni/tests/models/pytorch_models/nested.py b/src/sdk/pynni/tests/models/pytorch_models/nested.py new file mode 100644 index 0000000000..71e1ccf2c3 --- /dev/null +++ b/src/sdk/pynni/tests/models/pytorch_models/nested.py @@ -0,0 +1,34 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+
+import torch.nn as nn
+import torch.nn.functional as F
+
+from nni.nas.pytorch.mutables import LayerChoice, InputChoice
+
+
+class MutableOp(nn.Module):
+    def __init__(self, kernel_size):
+        super().__init__()
+        self.conv = nn.Conv2d(3, 120, kernel_size, padding=kernel_size // 2)
+        self.nested_mutable = InputChoice(n_candidates=10)
+
+    def forward(self, x):
+        return self.conv(x)
+
+
+class NestedSpace(nn.Module):
+    # this doesn't pass tests
+    def __init__(self, test_case):
+        super().__init__()
+        self.test_case = test_case
+        self.conv1 = LayerChoice([MutableOp(3), MutableOp(5)])
+        self.gap = nn.AdaptiveAvgPool2d(1)
+        self.fc1 = nn.Linear(120, 10)
+
+    def forward(self, x):
+        bs = x.size(0)
+        x = F.relu(self.conv1(x))
+        x = self.gap(x).view(bs, -1)
+        x = self.fc1(x)
+        return x
diff --git a/src/sdk/pynni/tests/test_nas.py b/src/sdk/pynni/tests/test_nas.py
new file mode 100644
index 0000000000..53b52541ad
--- /dev/null
+++ b/src/sdk/pynni/tests/test_nas.py
@@ -0,0 +1,106 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+import importlib +import os +import sys +from unittest import TestCase, main + +import torch +import torch.nn as nn +from nni.nas.pytorch.classic_nas import get_and_apply_next_architecture +from nni.nas.pytorch.darts import DartsMutator +from nni.nas.pytorch.enas import EnasMutator +from nni.nas.pytorch.fixed import apply_fixed_architecture +from nni.nas.pytorch.random import RandomMutator +from nni.nas.pytorch.utils import _reset_global_mutable_counting + + +class NasTestCase(TestCase): + + def setUp(self): + self.default_input_size = [3, 32, 32] + self.model_path = os.path.join(os.path.dirname(__file__), "models") + sys.path.append(self.model_path) + self.model_module = importlib.import_module("pytorch_models") + self.default_cls = [self.model_module.NaiveSearchSpace, self.model_module.SpaceWithMutableScope] + self.cuda_test = [0] + if torch.cuda.is_available(): + self.cuda_test.append(1) + if torch.cuda.device_count() > 1: + self.cuda_test.append(torch.cuda.device_count()) + + def tearDown(self): + sys.path.remove(self.model_path) + + def iterative_sample_and_forward(self, model, mutator=None, input_size=None, n_iters=20, test_backward=True, + use_cuda=False): + if input_size is None: + input_size = self.default_input_size + # support pytorch only + input_size = [8 if use_cuda else 2] + input_size # at least 2 samples to enable batch norm + for _ in range(n_iters): + for param in model.parameters(): + param.grad = None + if mutator is not None: + mutator.reset() + x = torch.randn(input_size) + if use_cuda: + x = x.cuda() + y = torch.sum(model(x)) + if test_backward: + y.backward() + + def default_mutator_test_pipeline(self, mutator_cls): + for model_cls in self.default_cls: + for cuda_test in self.cuda_test: + _reset_global_mutable_counting() + model = model_cls(self) + mutator = mutator_cls(model) + if cuda_test: + model.cuda() + mutator.cuda() + if cuda_test > 1: + model = nn.DataParallel(model) + self.iterative_sample_and_forward(model, mutator, 
use_cuda=cuda_test) + _reset_global_mutable_counting() + model_fixed = model_cls(self) + if cuda_test: + model_fixed.cuda() + if cuda_test > 1: + model_fixed = nn.DataParallel(model_fixed) + with torch.no_grad(): + arc = mutator.export() + apply_fixed_architecture(model_fixed, arc) + self.iterative_sample_and_forward(model_fixed, n_iters=1, use_cuda=cuda_test) + + def test_random_mutator(self): + self.default_mutator_test_pipeline(RandomMutator) + + def test_enas_mutator(self): + self.default_mutator_test_pipeline(EnasMutator) + + def test_darts_mutator(self): + # DARTS doesn't support DataParallel. To be fixed. + self.cuda_test = [t for t in self.cuda_test if t <= 1] + self.default_mutator_test_pipeline(DartsMutator) + + def test_apply_twice(self): + model = self.model_module.NaiveSearchSpace(self) + with self.assertRaises(RuntimeError): + for _ in range(2): + RandomMutator(model) + + def test_nested_space(self): + model = self.model_module.NestedSpace(self) + with self.assertRaises(RuntimeError): + RandomMutator(model) + + def test_classic_nas(self): + for model_cls in self.default_cls: + model = model_cls(self) + get_and_apply_next_architecture(model) + self.iterative_sample_and_forward(model) + + +if __name__ == '__main__': + main() diff --git a/test/config_test.py b/test/config_test.py index 1db4bf086d..91136a8a95 100644 --- a/test/config_test.py +++ b/test/config_test.py @@ -29,6 +29,12 @@ def gen_new_config(config_file, training_service='local'): config['trial'].pop('command') if 'gpuNum' in config['trial']: config['trial'].pop('gpuNum') + + if training_service == 'frameworkcontroller': + it_config[training_service]['trial']['taskRoles'][0]['command'] = config['trial']['command'] + config['trial'].pop('command') + if 'gpuNum' in config['trial']: + config['trial'].pop('gpuNum') deep_update(config, it_config['all']) deep_update(config, it_config[training_service]) @@ -106,7 +112,7 @@ def run(args): parser = argparse.ArgumentParser() 
parser.add_argument("--config", type=str, default=None) parser.add_argument("--exclude", type=str, default=None) - parser.add_argument("--ts", type=str, choices=['local', 'remote', 'pai', 'kubeflow'], default='local') + parser.add_argument("--ts", type=str, choices=['local', 'remote', 'pai', 'kubeflow', 'frameworkcontroller'], default='local') parser.add_argument("--local_gpu", action='store_true') parser.add_argument("--preinstall", action='store_true') args = parser.parse_args() diff --git a/test/generate_ts_config.py b/test/generate_ts_config.py index 53de5d8d0d..fb5784d3b1 100644 --- a/test/generate_ts_config.py +++ b/test/generate_ts_config.py @@ -42,6 +42,21 @@ def update_training_service_config(args): config[args.ts]['kubeflowConfig']['azureStorage']['azureShare'] = args.azs_share if args.nni_docker_image is not None: config[args.ts]['trial']['worker']['image'] = args.nni_docker_image + elif args.ts == 'frameworkcontroller': + if args.nfs_server is not None: + config[args.ts]['frameworkcontrollerConfig']['nfs']['server'] = args.nfs_server + if args.nfs_path is not None: + config[args.ts]['frameworkcontrollerConfig']['nfs']['path'] = args.nfs_path + if args.keyvault_vaultname is not None: + config[args.ts]['frameworkcontrollerConfig']['keyVault']['vaultName'] = args.keyvault_vaultname + if args.keyvault_name is not None: + config[args.ts]['frameworkcontrollerConfig']['keyVault']['name'] = args.keyvault_name + if args.azs_account is not None: + config[args.ts]['frameworkcontrollerConfig']['azureStorage']['accountName'] = args.azs_account + if args.azs_share is not None: + config[args.ts]['frameworkcontrollerConfig']['azureStorage']['azureShare'] = args.azs_share + if args.nni_docker_image is not None: + config[args.ts]['trial']['taskRoles'][0]['image'] = args.nni_docker_image elif args.ts == 'remote': if args.remote_user is not None: config[args.ts]['machineList'][0]['username'] = args.remote_user @@ -69,7 +84,7 @@ def convert_command(): if __name__ == 
'__main__': parser = argparse.ArgumentParser() - parser.add_argument("--ts", type=str, choices=['pai', 'kubeflow', 'remote', 'local'], default='pai') + parser.add_argument("--ts", type=str, choices=['pai', 'kubeflow', 'remote', 'local', 'frameworkcontroller'], default='pai') parser.add_argument("--nni_docker_image", type=str) parser.add_argument("--nni_manager_ip", type=str) # args for PAI @@ -79,7 +94,7 @@ def convert_command(): parser.add_argument("--data_dir", type=str) parser.add_argument("--output_dir", type=str) parser.add_argument("--vc", type=str) - # args for kubeflow + # args for kubeflow and frameworkController parser.add_argument("--nfs_server", type=str) parser.add_argument("--nfs_path", type=str) parser.add_argument("--keyvault_vaultname", type=str) diff --git a/test/pipelines-it-frameworkcontroller.yml b/test/pipelines-it-frameworkcontroller.yml new file mode 100644 index 0000000000..e29fa3a8b1 --- /dev/null +++ b/test/pipelines-it-frameworkcontroller.yml @@ -0,0 +1,55 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. + +jobs: +- job: 'integration_test_frameworkController' + timeoutInMinutes: 0 + + steps: + - script: python3 -m pip install --upgrade pip setuptools --user + displayName: 'Install python tools' + + - script: | + cd deployment/pypi + echo 'building prerelease package...' 
+      make build
+      ls $(Build.SourcesDirectory)/deployment/pypi/dist/
+    condition: eq( variables['build_docker_img'], 'true' )
+    displayName: 'build nni bdist_wheel'
+
+  - script: |
+      source install.sh
+    displayName: 'Install nni toolkit via source code'
+
+  - script: |
+      sudo apt-get install swig -y
+      PATH=$HOME/.local/bin:$PATH nnictl package install --name=SMAC
+      PATH=$HOME/.local/bin:$PATH nnictl package install --name=BOHB
+    displayName: 'Install dependencies for integration tests in frameworkcontroller mode'
+
+  - script: |
+      if [ $(build_docker_img) = 'true' ]
+      then
+        cd deployment/pypi
+        docker login -u $(docker_hub_user) -p $(docker_hub_pwd)
+
+        echo 'updating docker file for installing nni from local...'
+        # update Dockerfile to install NNI in docker image from whl file built in last step
+        sed -ie 's/RUN python3 -m pip --no-cache-dir install nni/COPY .\/dist\/* .\nRUN python3 -m pip install nni-*.whl/' ../docker/Dockerfile
+        cat ../docker/Dockerfile
+        export IMG_TAG=`date -u +%y%m%d%H%M`
+        docker build -f ../docker/Dockerfile -t $(test_docker_img_name):$IMG_TAG .
+ docker push $(test_docker_img_name):$IMG_TAG + export TEST_IMG=$(test_docker_img_name):$IMG_TAG + cd ../../ + else + export TEST_IMG=$(existing_docker_img) + fi + echo "TEST_IMG:$TEST_IMG" + cd test + python3 generate_ts_config.py --ts frameworkcontroller --keyvault_vaultname $(keyVault_vaultName) --keyvault_name $(keyVault_name) \ + --azs_account $(azureStorage_accountName) --azs_share $(azureStorage_azureShare) --nni_docker_image $TEST_IMG --nni_manager_ip $(nni_manager_ip) + + cat training_service.yml + PATH=$HOME/.local/bin:$PATH python3 config_test.py --ts frameworkcontroller --exclude multi_phase + displayName: 'integration test' diff --git a/test/pipelines-it-local-windows.yml b/test/pipelines-it-local-windows.yml index 56a6e99bdc..688b9dcc94 100644 --- a/test/pipelines-it-local-windows.yml +++ b/test/pipelines-it-local-windows.yml @@ -8,7 +8,7 @@ jobs: - script: | python -m pip install scikit-learn==0.20.0 --user python -m pip install keras==2.1.6 --user - python -m pip install https://download.pytorch.org/whl/cu90/torch-0.4.1-cp36-cp36m-win_amd64.whl --user + python -m pip install torch===1.2.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html --user python -m pip install torchvision --user python -m pip install tensorflow-gpu==1.11.0 --user displayName: 'Install dependencies for integration tests' diff --git a/test/training_service.yml b/test/training_service.yml index 9fe8a85a0b..2a00acca54 100644 --- a/test/training_service.yml +++ b/test/training_service.yml @@ -24,6 +24,32 @@ kubeflow: image: trainingServicePlatform: kubeflow +frameworkcontroller: + maxExecDuration: 15m + nniManagerIp: + frameworkcontrollerConfig: + serviceAccountName: frameworkbarrier + storage: azureStorage + keyVault: + vaultName: + name: + azureStorage: + accountName: + azureShare: + trial: + taskRoles: + - name: worker + taskNum: 1 + command: + gpuNum: 1 + cpuNum: 1 + memoryMB: 8192 + image: + frameworkAttemptCompletionPolicy: + minFailedTaskCount: 1 + 
minSucceededTaskCount: 1 + trainingServicePlatform: frameworkcontroller + local: trainingServicePlatform: local pai: diff --git a/tools/nni_cmd/config_schema.py b/tools/nni_cmd/config_schema.py index 8017946ce9..4be463c7b6 100644 --- a/tools/nni_cmd/config_schema.py +++ b/tools/nni_cmd/config_schema.py @@ -271,16 +271,17 @@ def setPathCheck(key): pai_trial_schema = { 'trial':{ - 'command': setType('command', str), 'codeDir': setPathCheck('codeDir'), - 'gpuNum': setNumberRange('gpuNum', int, 0, 99999), - 'cpuNum': setNumberRange('cpuNum', int, 0, 99999), - 'memoryMB': setType('memoryMB', int), - 'image': setType('image', str), - Optional('virtualCluster'): setType('virtualCluster', str), 'nniManagerNFSMountPath': setPathCheck('nniManagerNFSMountPath'), 'containerNFSMountPath': setType('containerNFSMountPath', str), - 'paiStoragePlugin': setType('paiStoragePlugin', str) + 'command': setType('command', str), + Optional('gpuNum'): setNumberRange('gpuNum', int, 0, 99999), + Optional('cpuNum'): setNumberRange('cpuNum', int, 0, 99999), + Optional('memoryMB'): setType('memoryMB', int), + Optional('image'): setType('image', str), + Optional('virtualCluster'): setType('virtualCluster', str), + Optional('paiStoragePlugin'): setType('paiStoragePlugin', str), + Optional('paiConfigPath'): And(os.path.exists, error=SCHEMA_PATH_ERROR % 'paiConfigPath') } } @@ -407,15 +408,8 @@ def setPathCheck(key): } machine_list_schema = { - Optional('machineList'):[Or({ - 'ip': setType('ip', str), - Optional('port'): setNumberRange('port', int, 1, 65535), - 'username': setType('username', str), - 'passwd': setType('passwd', str), - Optional('gpuIndices'): Or(int, And(str, lambda x: len([int(i) for i in x.split(',')]) > 0), error='gpuIndex format error!'), - Optional('maxTrialNumPerGpu'): setType('maxTrialNumPerGpu', int), - Optional('useActiveGpu'): setType('useActiveGpu', bool) - }, { + Optional('machineList'):[Or( + { 'ip': setType('ip', str), Optional('port'): setNumberRange('port', int, 1, 
65535), 'username': setType('username', str), @@ -424,6 +418,15 @@ def setPathCheck(key): Optional('gpuIndices'): Or(int, And(str, lambda x: len([int(i) for i in x.split(',')]) > 0), error='gpuIndex format error!'), Optional('maxTrialNumPerGpu'): setType('maxTrialNumPerGpu', int), Optional('useActiveGpu'): setType('useActiveGpu', bool) + }, + { + 'ip': setType('ip', str), + Optional('port'): setNumberRange('port', int, 1, 65535), + 'username': setType('username', str), + 'passwd': setType('passwd', str), + Optional('gpuIndices'): Or(int, And(str, lambda x: len([int(i) for i in x.split(',')]) > 0), error='gpuIndex format error!'), + Optional('maxTrialNumPerGpu'): setType('maxTrialNumPerGpu', int), + Optional('useActiveGpu'): setType('useActiveGpu', bool) })] } diff --git a/tools/nni_cmd/launcher.py b/tools/nni_cmd/launcher.py index 5d406a0ae3..26332c37b0 100644 --- a/tools/nni_cmd/launcher.py +++ b/tools/nni_cmd/launcher.py @@ -9,7 +9,7 @@ import site import time import tempfile -from subprocess import Popen, check_call, CalledProcessError +from subprocess import Popen, check_call, CalledProcessError, PIPE, STDOUT from nni_annotation import expand_annotations, generate_search_space from nni.constants import ModuleName, AdvisorModuleName from .launcher_utils import validate_all_content @@ -20,7 +20,7 @@ detect_port, get_user, get_python_dir from .constants import NNICTL_HOME_DIR, ERROR_INFO, REST_TIME_OUT, EXPERIMENT_SUCCESS_INFO, LOG_HEADER, PACKAGE_REQUIREMENTS from .command_utils import check_output_command, kill_command -from .nnictl_utils import update_experiment, set_monitor +from .nnictl_utils import update_experiment def get_log_path(config_file_name): '''generate stdout and stderr log path''' @@ -78,17 +78,17 @@ def _generate_installation_path(sitepackages_path): print_error('Fail to find nni under python library') exit(1) -def start_rest_server(port, platform, mode, config_file_name, experiment_id=None, log_dir=None, log_level=None): +def 
start_rest_server(args, platform, mode, config_file_name, experiment_id=None, log_dir=None, log_level=None): '''Run nni manager process''' - if detect_port(port): + if detect_port(args.port): print_error('Port %s is used by another process, please reset the port!\n' \ - 'You could use \'nnictl create --help\' to get help information' % port) + 'You could use \'nnictl create --help\' to get help information' % args.port) exit(1) - if (platform != 'local') and detect_port(int(port) + 1): + if (platform != 'local') and detect_port(int(args.port) + 1): print_error('PAI mode need an additional adjacent port %d, and the port %d is used by another process!\n' \ 'You could set another port to start experiment!\n' \ - 'You could use \'nnictl create --help\' to get help information' % ((int(port) + 1), (int(port) + 1))) + 'You could use \'nnictl create --help\' to get help information' % ((int(args.port) + 1), (int(args.port) + 1))) exit(1) print_normal('Starting restful server...') @@ -99,7 +99,7 @@ def start_rest_server(port, platform, mode, config_file_name, experiment_id=None node_command = 'node' if sys.platform == 'win32': node_command = os.path.join(entry_dir[:-3], 'Scripts', 'node.exe') - cmds = [node_command, entry_file, '--port', str(port), '--mode', platform] + cmds = [node_command, entry_file, '--port', str(args.port), '--mode', platform] if mode == 'view': cmds += ['--start_mode', 'resume'] cmds += ['--readonly', 'true'] @@ -111,6 +111,8 @@ def start_rest_server(port, platform, mode, config_file_name, experiment_id=None cmds += ['--log_level', log_level] if mode in ['resume', 'view']: cmds += ['--experiment_id', experiment_id] + if args.foreground: + cmds += ['--foreground', 'true'] stdout_full_path, stderr_full_path = get_log_path(config_file_name) with open(stdout_full_path, 'a+') as stdout_file, open(stderr_full_path, 'a+') as stderr_file: time_now = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time())) @@ -120,9 +122,15 @@ def 
start_rest_server(port, platform, mode, config_file_name, experiment_id=None stderr_file.write(log_header) if sys.platform == 'win32': from subprocess import CREATE_NEW_PROCESS_GROUP - process = Popen(cmds, cwd=entry_dir, stdout=stdout_file, stderr=stderr_file, creationflags=CREATE_NEW_PROCESS_GROUP) + if args.foreground: + process = Popen(cmds, cwd=entry_dir, stdout=PIPE, stderr=STDOUT, creationflags=CREATE_NEW_PROCESS_GROUP) + else: + process = Popen(cmds, cwd=entry_dir, stdout=stdout_file, stderr=stderr_file, creationflags=CREATE_NEW_PROCESS_GROUP) else: - process = Popen(cmds, cwd=entry_dir, stdout=stdout_file, stderr=stderr_file) + if args.foreground: + process = Popen(cmds, cwd=entry_dir, stdout=PIPE, stderr=PIPE) + else: + process = Popen(cmds, cwd=entry_dir, stdout=stdout_file, stderr=stderr_file) return process, str(time_now) def set_trial_config(experiment_config, port, config_file_name): @@ -424,7 +432,7 @@ def launch_experiment(args, experiment_config, mode, config_file_name, experimen if log_level not in ['trace', 'debug'] and (args.debug or experiment_config.get('debug') is True): log_level = 'debug' # start rest server - rest_process, start_time = start_rest_server(args.port, experiment_config['trainingServicePlatform'], \ + rest_process, start_time = start_rest_server(args, experiment_config['trainingServicePlatform'], \ mode, config_file_name, experiment_id, log_dir, log_level) nni_config.set_config('restServerPid', rest_process.pid) # Deal with annotation @@ -493,8 +501,14 @@ def launch_experiment(args, experiment_config, mode, config_file_name, experimen experiment_config['experimentName']) print_normal(EXPERIMENT_SUCCESS_INFO % (experiment_id, ' '.join(web_ui_url_list))) - if args.watch: - set_monitor(True, 3, args.port, rest_process.pid) + if args.foreground: + try: + while True: + log_content = rest_process.stdout.readline().strip().decode('utf-8') + print(log_content) + except KeyboardInterrupt: + kill_command(rest_process.pid) + 
print_normal('Stopping experiment...') def create_experiment(args): '''start a new experiment''' diff --git a/tools/nni_cmd/launcher_utils.py b/tools/nni_cmd/launcher_utils.py index f2d2c1d22f..9301e5bf2b 100644 --- a/tools/nni_cmd/launcher_utils.py +++ b/tools/nni_cmd/launcher_utils.py @@ -7,7 +7,7 @@ from schema import Schema from .config_schema import LOCAL_CONFIG_SCHEMA, REMOTE_CONFIG_SCHEMA, PAI_CONFIG_SCHEMA, PAI_YARN_CONFIG_SCHEMA, KUBEFLOW_CONFIG_SCHEMA,\ FRAMEWORKCONTROLLER_CONFIG_SCHEMA, tuner_schema_dict, advisor_schema_dict, assessor_schema_dict -from .common_utils import print_error, print_warning, print_normal +from .common_utils import print_error, print_warning, print_normal, get_yml_content def expand_path(experiment_config, key): '''Change '~' to user home directory''' @@ -63,6 +63,8 @@ def parse_path(experiment_config, config_path): if experiment_config.get('machineList'): for index in range(len(experiment_config['machineList'])): expand_path(experiment_config['machineList'][index], 'sshKeyPath') + if experiment_config['trial'].get('paiConfigPath'): + expand_path(experiment_config['trial'], 'paiConfigPath') #if users use relative path, convert it to absolute path root_path = os.path.dirname(config_path) @@ -94,6 +96,8 @@ def parse_path(experiment_config, config_path): if experiment_config.get('machineList'): for index in range(len(experiment_config['machineList'])): parse_relative_path(root_path, experiment_config['machineList'][index], 'sshKeyPath') + if experiment_config['trial'].get('paiConfigPath'): + parse_relative_path(root_path, experiment_config['trial'], 'paiConfigPath') def validate_search_space_content(experiment_config): '''Validate searchspace content, @@ -254,6 +258,45 @@ def validate_machine_list(experiment_config): print_error('Please set machineList!') exit(1) +def validate_pai_config_path(experiment_config): + '''validate paiConfigPath field''' + if experiment_config.get('trainingServicePlatform') == 'pai': + if 
experiment_config.get('trial', {}).get('paiConfigPath'): + # validate the file format of paiConfigPath, ensure it is yaml format + pai_config = get_yml_content(experiment_config['trial']['paiConfigPath']) + if experiment_config['trial'].get('image') is None: + if pai_config.get('prerequisites', [{}])[0].get('uri') is None: + print_error('Please set image field, or set image uri in your own paiConfig!') + exit(1) + experiment_config['trial']['image'] = pai_config['prerequisites'][0]['uri'] + if experiment_config['trial'].get('gpuNum') is None: + if pai_config.get('taskRoles', {}).get('taskrole', {}).get('resourcePerInstance', {}).get('gpu') is None: + print_error('Please set gpuNum field, or set resourcePerInstance gpu in your own paiConfig!') + exit(1) + experiment_config['trial']['gpuNum'] = pai_config['taskRoles']['taskrole']['resourcePerInstance']['gpu'] + if experiment_config['trial'].get('cpuNum') is None: + if pai_config.get('taskRoles', {}).get('taskrole', {}).get('resourcePerInstance', {}).get('cpu') is None: + print_error('Please set cpuNum field, or set resourcePerInstance cpu in your own paiConfig!') + exit(1) + experiment_config['trial']['cpuNum'] = pai_config['taskRoles']['taskrole']['resourcePerInstance']['cpu'] + if experiment_config['trial'].get('memoryMB') is None: + if pai_config.get('taskRoles', {}).get('taskrole', {}).get('resourcePerInstance', {}).get('memoryMB', {}) is None: + print_error('Please set memoryMB field, or set resourcePerInstance memoryMB in your own paiConfig!') + exit(1) + experiment_config['trial']['memoryMB'] = pai_config['taskRoles']['taskrole']['resourcePerInstance']['memoryMB'] + if experiment_config['trial'].get('paiStoragePlugin') is None: + if pai_config.get('extras', {}).get('com.microsoft.pai.runtimeplugin', [{}])[0].get('plugin') is None: + print_error('Please set paiStoragePlugin field, or set plugin in your own paiConfig!') + exit(1) + experiment_config['trial']['paiStoragePlugin'] = 
pai_config['extras']['com.microsoft.pai.runtimeplugin'][0]['plugin'] + else: + pai_trial_fields_required_list = ['image', 'gpuNum', 'cpuNum', 'memoryMB', 'paiStoragePlugin'] + for trial_field in pai_trial_fields_required_list: + if experiment_config['trial'].get(trial_field) is None: + print_error('Please set {0} in trial configuration,\ + or set additional pai configuration file path in paiConfigPath!'.format(trial_field)) + exit(1) + def validate_pai_trial_conifg(experiment_config): '''validate the trial config in pai platform''' if experiment_config.get('trainingServicePlatform') in ['pai', 'paiYarn']: @@ -269,6 +312,7 @@ def validate_pai_trial_conifg(experiment_config): print_warning(warning_information.format('dataDir')) if experiment_config.get('trial').get('outputDir'): print_warning(warning_information.format('outputDir')) + validate_pai_config_path(experiment_config) def validate_all_content(experiment_config, config_path): '''Validate whether experiment_config is valid''' diff --git a/tools/nni_cmd/nnictl.py b/tools/nni_cmd/nnictl.py index 856bd2adc8..d9da570abb 100644 --- a/tools/nni_cmd/nnictl.py +++ b/tools/nni_cmd/nnictl.py @@ -51,7 +51,7 @@ def parse_args(): parser_start.add_argument('--config', '-c', required=True, dest='config', help='the path of yaml config file') parser_start.add_argument('--port', '-p', default=DEFAULT_REST_PORT, dest='port', help='the port of restful server') parser_start.add_argument('--debug', '-d', action='store_true', help=' set debug mode') - parser_start.add_argument('--watch', '-w', action='store_true', help=' set watch mode') + parser_start.add_argument('--foreground', '-f', action='store_true', help=' set foreground mode, print log content to terminal') parser_start.set_defaults(func=create_experiment) # parse resume command @@ -59,7 +59,7 @@ def parse_args(): parser_resume.add_argument('id', nargs='?', help='The id of the experiment you want to resume') parser_resume.add_argument('--port', '-p', 
default=DEFAULT_REST_PORT, dest='port', help='the port of restful server') parser_resume.add_argument('--debug', '-d', action='store_true', help=' set debug mode') - parser_resume.add_argument('--watch', '-w', action='store_true', help=' set watch mode') + parser_resume.add_argument('--foreground', '-f', action='store_true', help=' set foreground mode, print log content to terminal') parser_resume.set_defaults(func=resume_experiment) # parse view command diff --git a/tools/nni_cmd/nnictl_utils.py b/tools/nni_cmd/nnictl_utils.py index a66197fac9..4866bcdce4 100644 --- a/tools/nni_cmd/nnictl_utils.py +++ b/tools/nni_cmd/nnictl_utils.py @@ -403,11 +403,13 @@ def remote_clean(machine_list, experiment_id=None): userName = machine.get('username') host = machine.get('ip') port = machine.get('port') + sshKeyPath = machine.get('sshKeyPath') + passphrase = machine.get('passphrase') if experiment_id: remote_dir = '/' + '/'.join(['tmp', 'nni', 'experiments', experiment_id]) else: remote_dir = '/' + '/'.join(['tmp', 'nni', 'experiments']) - sftp = create_ssh_sftp_client(host, port, userName, passwd) + sftp = create_ssh_sftp_client(host, port, userName, passwd, sshKeyPath, passphrase) print_normal('removing folder {0}'.format(host + ':' + str(port) + remote_dir)) remove_remote_directory(sftp, remote_dir) diff --git a/tools/nni_cmd/ssh_utils.py b/tools/nni_cmd/ssh_utils.py index 2e68611206..e3f26a8e24 100644 --- a/tools/nni_cmd/ssh_utils.py +++ b/tools/nni_cmd/ssh_utils.py @@ -30,12 +30,16 @@ def copy_remote_directory_to_local(sftp, remote_path, local_path): except Exception: pass -def create_ssh_sftp_client(host_ip, port, username, password): +def create_ssh_sftp_client(host_ip, port, username, password, ssh_key_path, passphrase): '''create ssh client''' try: paramiko = check_environment() conn = paramiko.Transport(host_ip, port) - conn.connect(username=username, password=password) + if ssh_key_path is not None: + ssh_key = paramiko.RSAKey.from_private_key_file(ssh_key_path, 
password=passphrase) + conn.connect(username=username, pkey=ssh_key) + else: + conn.connect(username=username, password=password) sftp = paramiko.SFTPClient.from_transport(conn) return sftp except Exception as exception: diff --git a/tools/nni_cmd/tensorboard_utils.py b/tools/nni_cmd/tensorboard_utils.py index 8cb0bbfc17..60d589083a 100644 --- a/tools/nni_cmd/tensorboard_utils.py +++ b/tools/nni_cmd/tensorboard_utils.py @@ -37,12 +37,14 @@ def copy_data_from_remote(args, nni_config, trial_content, path_list, host_list, machine_dict = {} local_path_list = [] for machine in machine_list: - machine_dict[machine['ip']] = {'port': machine['port'], 'passwd': machine['passwd'], 'username': machine['username']} + machine_dict[machine['ip']] = {'port': machine['port'], 'passwd': machine['passwd'], 'username': machine['username'], + 'sshKeyPath': machine.get('sshKeyPath'), 'passphrase': machine.get('passphrase')} for index, host in enumerate(host_list): local_path = os.path.join(temp_nni_path, trial_content[index].get('id')) local_path_list.append(local_path) print_normal('Copying log data from %s to %s' % (host + ':' + path_list[index], local_path)) - sftp = create_ssh_sftp_client(host, machine_dict[host]['port'], machine_dict[host]['username'], machine_dict[host]['passwd']) + sftp = create_ssh_sftp_client(host, machine_dict[host]['port'], machine_dict[host]['username'], machine_dict[host]['passwd'], + machine_dict[host]['sshKeyPath'], machine_dict[host]['passphrase']) copy_remote_directory_to_local(sftp, path_list[index], local_path) print_normal('Copy done!') return local_path_list From 889218bbd7c9a24f719ce63ec05799139e3e1e35 Mon Sep 17 00:00:00 2001 From: quzha Date: Mon, 10 Feb 2020 09:13:07 +0800 Subject: [PATCH 5/9] remove Installation.md --- docs/en_US/Tutorial/Installation.md | 164 ---------------------------- 1 file changed, 164 deletions(-) delete mode 100644 docs/en_US/Tutorial/Installation.md diff --git a/docs/en_US/Tutorial/Installation.md 
b/docs/en_US/Tutorial/Installation.md deleted file mode 100644 index f324366bd8..0000000000 --- a/docs/en_US/Tutorial/Installation.md +++ /dev/null @@ -1,164 +0,0 @@ -# Installation of NNI - -Currently we support installation on Linux, macOS and Windows. - -## Install on Linux or macOS - -* Install NNI through pip - - Prerequisite: `python 64-bit >= 3.5` - - ```bash - python3 -m pip install --upgrade nni - ``` - -* Install NNI through source code - - If you are interested on special or latest code version, you can install NNI through source code. - - Prerequisites: `python 64-bit >=3.5`, `git`, `wget` - - ```bash - git clone -b v0.8 https://github.com/Microsoft/nni.git - cd nni - ./install.sh - ``` - -* Use NNI in a docker image - - You can also install NNI in a docker image. Please follow the instructions [here](https://github.com/Microsoft/nni/tree/master/deployment/docker/README.md) to build NNI docker image. The NNI docker image can also be retrieved from Docker Hub through the command `docker pull msranni/nni:latest`. - -## Install on Windows - - Anaconda or Miniconda is highly recommended to manage multiple Python environments. - -* Install NNI through pip - - Prerequisites: `python 64-bit >= 3.5` - - ```bash - python -m pip install --upgrade nni - ``` - -* Install NNI through source code - - If you are interested on special or latest code version, you can install NNI through source code. - - Prerequisites: `python 64-bit >=3.5`, `git`, `PowerShell`. - - ```bash - git clone -b v0.8 https://github.com/Microsoft/nni.git - cd nni - powershell -ExecutionPolicy Bypass -file install.ps1 - ``` - -## Verify installation - -The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is used** when running it. - -* Download the examples via clone the source code. - - ```bash - git clone -b v1.3 https://github.com/Microsoft/nni.git - ``` - -* Run the MNIST example. 
- - Linux or macOS - - ```bash - nnictl create --config nni/examples/trials/mnist-tfv1/config.yml - ``` - - Windows - - ```bash - nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml - ``` - -* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`. - -```text -INFO: Starting restful server... -INFO: Successfully started Restful server! -INFO: Setting local config... -INFO: Successfully set local config! -INFO: Starting experiment... -INFO: Successfully started experiment! ------------------------------------------------------------------------ -The experiment id is egchD4qy -The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080 ------------------------------------------------------------------------ - -You can use these commands to get more information about the experiment ------------------------------------------------------------------------ - commands description -1. nnictl experiment show show the information of experiments -2. nnictl trial ls list all of trial jobs -3. nnictl top monitor the status of running experiments -4. nnictl log stderr show stderr log content -5. nnictl log stdout show stdout log content -6. nnictl stop stop an experiment -7. nnictl trial kill kill a trial job by id -8. nnictl --help get help information about nnictl ------------------------------------------------------------------------ -``` - -* Open the `Web UI url` in your browser, you can view detail information of the experiment and all the submitted trial jobs as shown below. [Here](../Tutorial/WebUI.md) are more Web UI pages. - -![overview](../../img/webui_overview_page.png) - -![detail](../../img/webui_trialdetail_page.png) - -## System requirements - -Due to potential programming changes, the minimum system requirements of NNI may change over time. 
- -### Linux - -| | Recommended | Minimum | -| -------------------- | ---------------------------------------------- | -------------------------------------- | -| **Operating System** | Ubuntu 16.04 or above | -| **CPU** | Intel® Core™ i5 or AMD Phenom™ II X3 or better | Intel® Core™ i3 or AMD Phenom™ X3 8650 | -| **GPU** | NVIDIA® GeForce® GTX 660 or better | NVIDIA® GeForce® GTX 460 | -| **Memory** | 6 GB RAM | 4 GB RAM | -| **Storage** | 30 GB available hare drive space | -| **Internet** | Boardband internet connection | -| **Resolution** | 1024 x 768 minimum display resolution | - -### macOS - -| | Recommended | Minimum | -| -------------------- | ------------------------------------- | --------------------------------------------------------- | -| **Operating System** | macOS 10.14.1 or above | -| **CPU** | Intel® Core™ i7-4770 or better | Intel® Core™ i5-760 or better | -| **GPU** | AMD Radeon™ R9 M395X or better | NVIDIA® GeForce® GT 750M or AMD Radeon™ R9 M290 or better | -| **Memory** | 8 GB RAM | 4 GB RAM | -| **Storage** | 70GB available space SSD | 70GB available space 7200 RPM HDD | -| **Internet** | Boardband internet connection | -| **Resolution** | 1024 x 768 minimum display resolution | - -### Windows - -| | Recommended | Minimum | -| -------------------- | ---------------------------------------------- | -------------------------------------- | -| **Operating System** | Windows 10 1809 or above | -| **CPU** | Intel® Core™ i5 or AMD Phenom™ II X3 or better | Intel® Core™ i3 or AMD Phenom™ X3 8650 | -| **GPU** | NVIDIA® GeForce® GTX 660 or better | NVIDIA® GeForce® GTX 460 | -| **Memory** | 6 GB RAM | 4 GB RAM | -| **Storage** | 30 GB available hare drive space | -| **Internet** | Boardband internet connection | -| **Resolution** | 1024 x 768 minimum display resolution | - -## Further reading - -* [Overview](../Overview.md) -* [Use command line tool nnictl](Nnictl.md) -* [Use NNIBoard](WebUI.md) -* [Define search space](SearchSpaceSpec.md) -* 
[Config an experiment](ExperimentConfig.md) -* [How to run an experiment on local (with multiple GPUs)?](../TrainingService/LocalMode.md) -* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md) -* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md) -* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md) -* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md) From eab0da154b4c8cf68f95ef294844649c9e17ee60 Mon Sep 17 00:00:00 2001 From: QuanluZhang Date: Mon, 10 Feb 2020 17:42:46 +0800 Subject: [PATCH 6/9] Dev compression speedup (#1999) --- examples/model_compress/fpgm_torch_mnist.py | 2 +- examples/model_compress/model_speedup.py | 153 ++++++ examples/model_compress/speedup.md | 105 ++++ .../nni/compression/speedup/torch/__init__.py | 1 + .../speedup/torch/compress_modules.py | 133 +++++ .../compression/speedup/torch/compressor.py | 493 ++++++++++++++++++ .../compression/speedup/torch/infer_shape.py | 481 +++++++++++++++++ .../pynni/nni/compression/torch/__init__.py | 1 + .../compression/torch/apply_compression.py | 70 +++ 9 files changed, 1438 insertions(+), 1 deletion(-) create mode 100644 examples/model_compress/model_speedup.py create mode 100644 examples/model_compress/speedup.md create mode 100644 src/sdk/pynni/nni/compression/speedup/torch/__init__.py create mode 100644 src/sdk/pynni/nni/compression/speedup/torch/compress_modules.py create mode 100644 src/sdk/pynni/nni/compression/speedup/torch/compressor.py create mode 100644 src/sdk/pynni/nni/compression/speedup/torch/infer_shape.py create mode 100644 src/sdk/pynni/nni/compression/torch/apply_compression.py diff --git a/examples/model_compress/fpgm_torch_mnist.py b/examples/model_compress/fpgm_torch_mnist.py index db141b37d9..059b451298 100644 --- a/examples/model_compress/fpgm_torch_mnist.py +++ b/examples/model_compress/fpgm_torch_mnist.py @@ 
-17,7 +17,7 @@ def forward(self, x): x = F.max_pool2d(x, 2, 2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) - x = x.view(-1, 4 * 4 * 50) + x = x.view(x.size(0), -1) x = F.relu(self.fc1(x)) x = self.fc2(x) return F.log_softmax(x, dim=1) diff --git a/examples/model_compress/model_speedup.py b/examples/model_compress/model_speedup.py new file mode 100644 index 0000000000..9d27d98da9 --- /dev/null +++ b/examples/model_compress/model_speedup.py @@ -0,0 +1,153 @@ +import argparse +import time +import torch +import torch.nn as nn +import torch.nn.functional as F +from torchvision import datasets, transforms +from models.cifar10.vgg import VGG +from nni.compression.speedup.torch import ModelSpeedup +from nni.compression.torch import apply_compression_results + +torch.manual_seed(0) +use_mask = False + +def apoz_speedup(masks_file, model_checkpoint): + device = torch.device('cuda') + model = VGG(depth=16) + model.to(device) + model.eval() + + dummy_input = torch.randn(64, 3, 32, 32) + if use_mask: + apply_compression_results(model, masks_file) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(32): + out = model(dummy_input) + #print(out.size(), out) + print('mask elapsed time: ', time.time() - start) + return + else: + #print("model before: ", model) + m_speedup = ModelSpeedup(model, dummy_input.to(device), masks_file) + m_speedup.speedup_model() + #print("model after: ", model) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(32): + out = model(dummy_input) + #print(out.size(), out) + print('speedup elapsed time: ', time.time() - start) + return + +def l1filter_speedup(masks_file, model_checkpoint): + device = torch.device('cuda') + model = VGG(depth=16) + model.to(device) + model.eval() + + dummy_input = torch.randn(64, 3, 32, 32) + if use_mask: + apply_compression_results(model, masks_file) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(32): + out = model(dummy_input) + 
#print(out.size(), out) + print('mask elapsed time: ', time.time() - start) + return + else: + #print("model before: ", model) + m_speedup = ModelSpeedup(model, dummy_input.to(device), masks_file) + m_speedup.speedup_model() + #print("model after: ", model) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(32): + out = model(dummy_input) + #print(out.size(), out) + print('speedup elapsed time: ', time.time() - start) + return + +def fpgm_speedup(masks_file, model_checkpoint): + from fpgm_torch_mnist import Mnist + device = torch.device('cpu') + model = Mnist() + model.to(device) + model.print_conv_filter_sparsity() + + dummy_input = torch.randn(64, 1, 28, 28) + if use_mask: + apply_compression_results(model, masks_file) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(40): + out = model(dummy_input) + print('mask elapsed time: ', time.time() - start) + #print(out.size(), out) + return + else: + m_speedup = ModelSpeedup(model, dummy_input.to(device), masks_file) + m_speedup.speedup_model() + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(40): + out = model(dummy_input) + print('speedup elapsed time: ', time.time() - start) + #print(out.size(), out) + return + +def slim_speedup(masks_file, model_checkpoint): + device = torch.device('cuda') + model = VGG(depth=19) + model.to(device) + model.eval() + + dummy_input = torch.randn(64, 3, 32, 32) + if use_mask: + apply_compression_results(model, masks_file) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(32): + out = model(dummy_input) + #print(out.size(), out) + print('mask elapsed time: ', time.time() - start) + return + else: + #print("model before: ", model) + m_speedup = ModelSpeedup(model, dummy_input.to(device), masks_file) + m_speedup.speedup_model() + #print("model after: ", model) + dummy_input = dummy_input.to(device) + start = time.time() + for _ in range(32): + out = model(dummy_input) + 
#print(out.size(), out) + print('speedup elapsed time: ', time.time() - start) + return + +if __name__ == '__main__': + parser = argparse.ArgumentParser("speedup") + parser.add_argument("--example_name", type=str, default="slim", help="the name of pruning example") + parser.add_argument("--masks_file", type=str, default=None, help="the path of the masks file") + parser.add_argument("--model_checkpoint", type=str, default=None, help="the path of checkpointed model") + args = parser.parse_args() + + if args.example_name == 'slim': + if args.masks_file is None: + args.masks_file = 'mask_vgg19_cifar10.pth' + slim_speedup(args.masks_file, args.model_checkpoint) + elif args.example_name == 'fpgm': + if args.masks_file is None: + args.masks_file = 'mask.pth' + fpgm_speedup(args.masks_file, args.model_checkpoint) + elif args.example_name == 'l1filter': + if args.masks_file is None: + args.masks_file = 'mask_vgg16_cifar10.pth' + l1filter_speedup(args.masks_file, args.model_checkpoint) + elif args.example_name == 'apoz': + if args.masks_file is None: + args.masks_file = 'mask_vgg16_cifar10.pth' + apoz_speedup(args.masks_file, args.model_checkpoint) + else: + raise ValueError('unsupported example_name: {}'.format(args.example_name)) diff --git a/examples/model_compress/speedup.md b/examples/model_compress/speedup.md new file mode 100644 index 0000000000..06f21688c5 --- /dev/null +++ b/examples/model_compress/speedup.md @@ -0,0 +1,105 @@ +# Speed up Masked Model + +*This feature is still in Alpha version.* + +## Introduction + +Pruning algorithms usually use weight masks to simulate the real pruning. Masks can be used +to check model performance of a specific pruning (or sparsity), but there is no real speedup. +Since model speedup is the ultimate goal of model pruning, we try to provide a tool to users +to convert a model to a smaller one based on user provided masks (the masks come from the +pruning algorithms). + +There are two types of pruning. 
One is fine-grained pruning, which does not change the shape of weights or input/output tensors; a sparse kernel is required to speed up a fine-grained pruned layer. The other is coarse-grained pruning (e.g., channels), where the shapes of weights and input/output tensors usually change due to the pruning. To speed up this kind of pruning there is no need for a sparse kernel: we can simply replace the pruned layer with a smaller one. Since community support for sparse kernels is limited, we currently only support the speedup of coarse-grained pruning and leave fine-grained pruning for the future. + +## Design and Implementation + +To speed up a model, the pruned layers should be replaced, either with a smaller layer for a coarse-grained mask, or with a sparse kernel for a fine-grained mask. A coarse-grained mask usually changes the shape of weights or input/output tensors, so we need shape inference to check whether other, unpruned layers should also be replaced because of the shape change. Therefore, our design has two main steps: first, do shape inference to find all the modules that should be replaced; second, replace the modules. The first step requires the topology (i.e., connections) of the model; for PyTorch we use `jit.trace` to obtain the model graph. + +For each module, we should prepare four functions: three for shape inference and one for module replacement. The three shape inference functions are: given the weight shape, infer the input/output shape; given the input shape, infer the weight/output shape; and given the output shape, infer the weight/input shape. The module replacement function returns a newly created, smaller module.
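The replacement step can be sketched for a `Linear` layer as follows. This is a simplified illustration of the `replace_linear` function added in this patch: it handles only an input mask, and `kept_input_index` (an index tensor of the input features that survive pruning) is a name chosen here for illustration.

```python
import torch

def replace_linear(linear, kept_input_index):
    # Build a smaller Linear that keeps only the input features listed in
    # kept_input_index; output features and bias are left untouched.
    new_linear = torch.nn.Linear(in_features=kept_input_index.size(0),
                                 out_features=linear.out_features,
                                 bias=linear.bias is not None)
    # Keep only the weight columns that correspond to surviving inputs.
    new_linear.weight.data = torch.index_select(linear.weight.data, -1, kept_input_index)
    if linear.bias is not None:
        new_linear.bias.data.copy_(linear.bias.data)
    return new_linear
```

Feeding the smaller layer only the selected input features gives the same output as feeding the original layer the masked (zeroed) input, which is why such a replacement preserves the model's behavior.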
+ +## Usage + +```python +from nni.compression.speedup.torch import ModelSpeedup +# model: the model you want to speed up +# dummy_input: dummy input of the model, given to `jit.trace` +# masks_file: the mask file created by pruning algorithms +m_speedup = ModelSpeedup(model, dummy_input.to(device), masks_file) +m_speedup.speedup_model() +dummy_input = dummy_input.to(device) +start = time.time() +out = model(dummy_input) +print('elapsed time: ', time.time() - start) +``` +For complete examples, please refer to [the code](https://github.com/microsoft/nni/tree/master/examples/model_compress/model_speedup.py). + +NOTE: The current implementation only works with torch 1.3.1 and torchvision 0.4.2. + +## Limitations + +Since every module requires four functions for shape inference and module replacement, this is a large amount of work, so we have only implemented the ones required by the examples. If you want to speed up your own model and it is not supported by the current implementation, you are welcome to contribute. + +For PyTorch we can only replace modules; if functions in `forward` need to be replaced, our current implementation does not work. One workaround is to make the function a PyTorch module. + +## Speedup Results of Examples + +The code of these experiments can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/model_speedup.py).
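The latency numbers in the tables below were collected with a simple repeated-forward timing loop, as in `model_speedup.py`. A minimal sketch of that loop (the helper name and the `torch.no_grad()` wrapper are additions here for illustration, not part of the example script):

```python
import time
import torch

def measure_latency(model, dummy_input, runs=32):
    # Total wall-clock time of `runs` forward passes, mirroring the
    # timing loops in model_speedup.py.
    model.eval()
    with torch.no_grad():
        start = time.time()
        for _ in range(runs):
            model(dummy_input)
    return time.time() - start
```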
+ +### slim pruner example + +on one V100 GPU, +input tensor: `torch.randn(64, 3, 32, 32)` + +|Times| Mask Latency| Speedup Latency | +|---|---|---| +| 1 | 0.01197 | 0.005107 | +| 2 | 0.02019 | 0.008769 | +| 4 | 0.02733 | 0.014809 | +| 8 | 0.04310 | 0.027441 | +| 16 | 0.07731 | 0.05008 | +| 32 | 0.14464 | 0.10027 | + +### fpgm pruner example + +on cpu, +input tensor: `torch.randn(64, 1, 28, 28)`, +too large variance + +|Times| Mask Latency| Speedup Latency | +|---|---|---| +| 1 | 0.01383 | 0.01839 | +| 2 | 0.01167 | 0.003558 | +| 4 | 0.01636 | 0.01088 | +| 40 | 0.14412 | 0.08268 | +| 40 | 1.29385 | 0.14408 | +| 40 | 0.41035 | 0.46162 | +| 400 | 6.29020 | 5.82143 | + +### l1filter pruner example + +on one V100 GPU, +input tensor: `torch.randn(64, 3, 32, 32)` + +|Times| Mask Latency| Speedup Latency | +|---|---|---| +| 1 | 0.01026 | 0.003677 | +| 2 | 0.01657 | 0.008161 | +| 4 | 0.02458 | 0.020018 | +| 8 | 0.03498 | 0.025504 | +| 16 | 0.06757 | 0.047523 | +| 32 | 0.10487 | 0.086442 | + +### APoZ pruner example + +on one V100 GPU, +input tensor: `torch.randn(64, 3, 32, 32)` + +|Times| Mask Latency| Speedup Latency | +|---|---|---| +| 1 | 0.01389 | 0.004208 | +| 2 | 0.01628 | 0.008310 | +| 4 | 0.02521 | 0.014008 | +| 8 | 0.03386 | 0.023923 | +| 16 | 0.06042 | 0.046183 | +| 32 | 0.12421 | 0.087113 | \ No newline at end of file diff --git a/src/sdk/pynni/nni/compression/speedup/torch/__init__.py b/src/sdk/pynni/nni/compression/speedup/torch/__init__.py new file mode 100644 index 0000000000..cef8ebd76c --- /dev/null +++ b/src/sdk/pynni/nni/compression/speedup/torch/__init__.py @@ -0,0 +1 @@ +from .compressor import ModelSpeedup \ No newline at end of file diff --git a/src/sdk/pynni/nni/compression/speedup/torch/compress_modules.py b/src/sdk/pynni/nni/compression/speedup/torch/compress_modules.py new file mode 100644 index 0000000000..540fe115cf --- /dev/null +++ b/src/sdk/pynni/nni/compression/speedup/torch/compress_modules.py @@ -0,0 +1,133 @@ +# Copyright (c) Microsoft 
Corporation.
+# Licensed under the MIT license.
+
+import torch
+from .infer_shape import CoarseMask, ModuleMasks
+
+replace_module = {
+    'BatchNorm2d': lambda module, mask: replace_batchnorm2d(module, mask),
+    'Conv2d': lambda module, mask: replace_conv2d(module, mask),
+    'MaxPool2d': lambda module, mask: no_replace(module, mask),
+    'ReLU': lambda module, mask: no_replace(module, mask),
+    'Linear': lambda module, mask: replace_linear(module, mask)
+}
+
+def no_replace(module, mask):
+    """
+    No need to replace
+    """
+    return module
+
+def replace_linear(linear, mask):
+    """
+    Parameters
+    ----------
+    linear : torch.nn.Linear
+        The linear module to be replaced
+    mask : ModuleMasks
+        The masks of this module
+
+    Returns
+    -------
+    torch.nn.Linear
+        The new linear module
+    """
+    assert isinstance(mask, ModuleMasks)
+    assert mask.input_mask is not None
+    assert mask.output_mask is None
+    assert not mask.param_masks
+    index = mask.input_mask.mask_index[-1]
+    print(mask.input_mask.mask_index)
+    in_features = index.size()[0]
+    print('linear: ', in_features)
+    new_linear = torch.nn.Linear(in_features=in_features,
+                                 out_features=linear.out_features,
+                                 bias=linear.bias is not None)
+    new_linear.to(linear.weight.device)
+    new_linear.weight.data = torch.index_select(linear.weight.data, -1, index.to(linear.weight.device))
+    if linear.bias is not None:
+        new_linear.bias.data.copy_(linear.bias.data)
+    return new_linear
+
+def replace_batchnorm2d(norm, mask):
+    """
+    Parameters
+    ----------
+    norm : torch.nn.BatchNorm2d
+        The batchnorm module to be replaced
+    mask : ModuleMasks
+        The masks of this module
+
+    Returns
+    -------
+    torch.nn.BatchNorm2d
+        The new batchnorm module
+    """
+    assert isinstance(mask, ModuleMasks)
+    assert 'weight' in mask.param_masks and 'bias' in mask.param_masks
+    index = mask.param_masks['weight'].mask_index[0]
+    num_features = index.size()[0]
+    print("replace batchnorm2d: ", num_features, index)
+    new_norm = 
torch.nn.BatchNorm2d(num_features=num_features, + eps=norm.eps, + momentum=norm.momentum, + affine=norm.affine, + track_running_stats=norm.track_running_stats) + # assign weights + new_norm.weight.data = torch.index_select(norm.weight.data, 0, index) + new_norm.bias.data = torch.index_select(norm.bias.data, 0, index) + if norm.track_running_stats: + new_norm.running_mean.data = torch.index_select(norm.running_mean.data, 0, index) + new_norm.running_var.data = torch.index_select(norm.running_var.data, 0, index) + return new_norm + +def replace_conv2d(conv, mask): + """ + Parameters + ---------- + conv : torch.nn.Conv2d + The conv2d module to be replaced + mask : ModuleMasks + The masks of this module + + Returns + ------- + torch.nn.Conv2d + The new conv2d module + """ + assert isinstance(mask, ModuleMasks) + if mask.input_mask is None: + in_channels = conv.in_channels + else: + in_channels_index = mask.input_mask.mask_index[1] + in_channels = in_channels_index.size()[0] + if mask.output_mask is None: + out_channels = conv.out_channels + else: + out_channels_index = mask.output_mask.mask_index[1] + out_channels = out_channels_index.size()[0] + new_conv = torch.nn.Conv2d(in_channels=in_channels, + out_channels=out_channels, + kernel_size=conv.kernel_size, + stride=conv.stride, + padding=conv.padding, + dilation=conv.dilation, + groups=1, # currently only support groups is 1 + bias=conv.bias is not None, + padding_mode=conv.padding_mode) + new_conv.to(conv.weight.device) + tmp_weight_data = tmp_bias_data = None + if mask.output_mask is not None: + tmp_weight_data = torch.index_select(conv.weight.data, 0, out_channels_index) + if conv.bias is not None: + tmp_bias_data = torch.index_select(conv.bias.data, 0, out_channels_index) + # NOTE: does not support group + if mask.input_mask is not None: + tmp_weight_data = torch.index_select(conv.weight.data if tmp_weight_data is None else tmp_weight_data, + 1, in_channels_index) + assert tmp_weight_data is not None, "Conv2d 
weight should be updated based on masks" + new_conv.weight.data.copy_(tmp_weight_data) + if conv.bias is not None: + print('final conv.bias is not None') + new_conv.bias.data.copy_(conv.bias.data if tmp_bias_data is None else tmp_bias_data) + return new_conv diff --git a/src/sdk/pynni/nni/compression/speedup/torch/compressor.py b/src/sdk/pynni/nni/compression/speedup/torch/compressor.py new file mode 100644 index 0000000000..1686a5c209 --- /dev/null +++ b/src/sdk/pynni/nni/compression/speedup/torch/compressor.py @@ -0,0 +1,493 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. + +import logging +import queue +import re +import torch +from .compress_modules import replace_module +from .infer_shape import ModuleMasks, infer_from_mask, infer_from_inshape, infer_from_outshape + +_logger = logging.getLogger(__name__) + + +def get_module_by_name(model, module_name): + """ + Get a module specified by its module name + + Parameters + ---------- + model : pytorch model + the pytorch model from which to get its module + module_name : str + the name of the required module + + Returns + ------- + module, module + the parent module of the required module, the required module + """ + name_list = module_name.split(".") + for name in name_list[:-1]: + model = getattr(model, name) + leaf_module = getattr(model, name_list[-1]) + return model, leaf_module + +class GNode: + """ + It is used to represent a node in model graph, in this graph a module is a node, + a function out of module (in ```forward``` function) could also be a node. 
+ """ + def __init__(self, node_name, node_type, op_type, inputs, outputs, nodes): + """ + Parameters + ---------- + node_name : str + It is module name if the node is a module, it is ```scope_name.node_kind.seq``` if it is a func + node_type : str + It only has two options: `module` or `func` + op_type : str + The operation type of the module or func + inputs : list of str + All the inputs of this node, each element is debugName of one input + outputs : list of str + All the outputs of this node, each element is debugName of one output + nodes : list of node + All the trace graph nodes included in this module or func + """ + self.name = node_name + self.type = node_type + self.op_type = op_type + self.inputs = inputs + self.outputs = outputs + self.nodes = nodes + # store supplementary information for different op types + # for example, for ```view``` it stores the shape of its input and output + self.auxiliary = None + +class ModelSpeedup: + """ + This class is to speedup the model with provided weight mask + """ + + def __init__(self, model, dummy_input, masks_file): + """ + Parameters + ---------- + model : pytorch model + The model user wants to speed up + dummy_input : pytorch tensor + The dummy input for ```jit.trace```, users should put it on right device before pass in + masks_file : str + The path of user provided mask file + """ + self.bound_model = model + self.dummy_input = dummy_input + self.masks = torch.load(masks_file) + self.is_training = model.training + # to obtain forward graph, model should be in ```eval``` mode + if self.is_training: + model.eval() + self.trace_graph = torch.jit.trace(model, dummy_input) + if self.is_training: + model.train() + self.inferred_masks = dict() # key: module_name, value: ModuleMasks + self.g_nodes = list() + self.global_count = 0 + self.name_to_gnode, self.input_to_gnode, self.output_to_gnode = self._build_graph() + + def _build_index_for_gnodes(self, g_nodes): + """ + Build indexes for quick search + + Parameters 
+ ---------- + g_nodes : list of GNode + All the g_node in processed model graph + + Returns + ------- + dict + use name to index g_nodes, key: node name, value: g_node + dict + use input (its name) to index g_nodes, + key: input, value: list of g_nodes that take this input + dict + use output (its name) to index g_nodes, + key: output, value: g_node that generates this output + """ + name_to_gnode = dict() + input_to_gnode = dict() + output_to_gnode = dict() + for node in g_nodes: + name_to_gnode[node.name] = node + for _input in node.inputs: + if _input in input_to_gnode: + input_to_gnode[_input].append(node) + else: + input_to_gnode[_input] = [node] + for output in node.outputs: + assert not output in output_to_gnode, \ + "One output cannot be generated by multiple nodes" + output_to_gnode[output] = node + return name_to_gnode, input_to_gnode, output_to_gnode + + def _expand_non_prim_node(self, node, nodes, input_to_node, output_to_node): + """ + For trace graph nodes, some nodes are not in modules, these nodes are usually generated by + the functions directly called in module ```forward```. For such nodes, some of them are + trivial op which are label by ```prim::```, some of them are not such ops which is call + non-prim ops. This function is to merge neighbor prim ops to a non-prim op, to construct + a GNode. 
+ + Parameters + ---------- + node : trace graph node + The non-prim node to expand + nodes : list of trace graph node + All the trace graph nodes within the same scope as the non-prim node + input_to_node : dict + key: input name, value: a node that uses this input + output_to_node : dict + key: output name, value: a node that generates this output + + Returns + ------- + GNode + the expanded non-prim node in GNode format + """ + # TODO: scope name could be empty + node_name = '.'.join([node.scopeName(), node.kind(), str(self.global_count)]) + #print('node_name: ', node_name) + self.global_count += 1 + op_type = node.kind() + + node_group = [node] + inputs = list() + outputs = list() + node_queue = queue.Queue() + node_queue.put(node) + while not node_queue.empty(): + curr_node = node_queue.get() + for _input in curr_node.inputs(): + input_name = _input.debugName() + if input_name in output_to_node and output_to_node[input_name] in nodes: + predecessor_node = output_to_node[input_name] + #print("predecessor_node: ", predecessor_node) + if predecessor_node.kind().startswith('prim::'): + node_group.append(predecessor_node) + node_queue.put(predecessor_node) + else: + inputs.append(input_name) + else: + inputs.append(input_name) + for output in node.outputs(): + outputs.append(output.debugName()) + g_node = GNode(node_name, 'func', op_type, inputs, outputs, node_group) + return g_node + + def _extract_shape_info(self, node): + """ + Extract the shape information of ```aten::view``` node + + Parameters + ---------- + node : trace graph node + It should be ```aten::view``` node + + Returns + ------- + dict + Include shape of input tensor and shape of output tensor + """ + t_input = None + for _input in node.inputs(): + t_input = _input + break + t_output = node.output() + assert isinstance(t_input.type(), torch._C.TensorType) + assert isinstance(t_output.type(), torch._C.TensorType) + in_shape = t_input.type().sizes() + out_shape = t_output.type().sizes() + return 
{'in_shape': in_shape, 'out_shape': out_shape} + + def _build_graph(self): + """ + Build graph using our defined format from jit trace. + There are basically three steps: first, construct necessary information (data structures), + second, extract all the modules to convert to GNode, Third, extract all functions to convert + to GNode. + + Returns + ------- + dict + use name to index g_nodes, key: node name, value: g_node + dict + use input (its name) to index g_nodes, + key: input, value: list of g_nodes that take this input + dict + use output (its name) to index g_nodes, + key: output, value: g_node that generates this output + """ + graph = self.trace_graph.graph + # if torch 1.4.0 is used, consider run torch._C._jit_pass_inline(graph) here + #print(graph) + # build output mapping, from output debugName to its node + output_to_node = dict() + # build input mapping, from input debugName to its node + input_to_node = dict() + # build module mapping, from module name to all nodes (as list) under this module scope + module_to_nodes = dict() + # module name to its type + module_to_type = dict() + # the mapping of function (non-module in forward) to nodes, key is scope name + func_to_nodes = dict() + + graph_inputs = list() + graph_outputs = list() + for _input in graph.inputs(): + graph_inputs.append(_input.debugName()) + for output in graph.outputs(): + graph_outputs.append(output.debugName()) + + for node in graph.nodes(): + # populate output_to_node and input_to_node + for output in node.outputs(): + output_name = output.debugName() + output_to_node[output_name] = node + for _input in node.inputs(): + input_name = _input.debugName() + input_to_node[input_name] = node + scope_name = node.scopeName() # example: scope_name, 'MyCell/Linear[linear]' + module_name_slices = re.findall(r'\[(.*?)\]', scope_name) + module_name = '.'.join(module_name_slices) + # if module_name is empty, it is not a module + if module_name == '': + if scope_name == '': + continue + else: + if 
scope_name in func_to_nodes: + func_to_nodes[scope_name].append(node) + else: + func_to_nodes[scope_name] = [node] + else: + scope_slice = scope_name.split('/')[-1] + module_type = scope_slice.split('[')[0] + module_to_type[module_name] = module_type + if module_name in module_to_nodes: + module_to_nodes[module_name].append(node) + else: + module_to_nodes[module_name] = [node] + + # construct GNode from module + for module_name, nodes in module_to_nodes.items(): + inputs = set() + outputs = set() + for node in nodes: + for output in node.outputs(): + outputs.add(output.debugName()) + for _input in node.inputs(): + inputs.add(_input.debugName()) + m_inputs = list() + m_outputs = list() + for output in outputs: + # TODO: one input could be the input of multiple nodes + if not output in input_to_node and output in graph_outputs: + m_outputs.append(output) + elif not input_to_node[output] in nodes: + m_outputs.append(output) + for _input in inputs: + if not _input in output_to_node and _input in graph_inputs: + m_inputs.append(_input) + elif not output_to_node[_input] in nodes: + m_inputs.append(_input) + print("module node_name: ", module_name) + if module_name == '': + for n in nodes: + print(n) + g_node = GNode(module_name, 'module', module_to_type[module_name], m_inputs, m_outputs, nodes) + self.g_nodes.append(g_node) + + # each scope_name may have multiple funcs, we split them and create GNode for each of them + for scope_name, nodes in func_to_nodes.items(): + # extract non prim:: nodes + non_prim_nodes = list() + for node in nodes: + if not node.kind().startswith('prim::'): + non_prim_nodes.append(node) + # for each non prim node, expand it has a GNode + for node in non_prim_nodes: + g_node = self._expand_non_prim_node(node, nodes, input_to_node, output_to_node) + self.g_nodes.append(g_node) + # get shape infor for view (aten::view) func + if g_node.op_type == 'aten::view': + g_node.auxiliary = self._extract_shape_info(node) + + # build index for g_nodes + 
name_to_gnode, input_to_gnode, output_to_gnode = self._build_index_for_gnodes(self.g_nodes) + + return name_to_gnode, input_to_gnode, output_to_gnode + + def _find_predecessors(self, module_name): + """ + Find predecessor GNode of the given GNode + + Parameters + ---------- + module_name : str + The name of the GNode + + Returns + ------- + list + a list of GNodes who are the given GNode's predecessor + """ + predecessors = [] + for _input in self.name_to_gnode[module_name].inputs: + if not _input in self.output_to_gnode: + print(_input) + if not _input in self.output_to_gnode: + # TODO: check _input which does not have node + print("output with no gnode: ", _input) + else: + g_node = self.output_to_gnode[_input] + predecessors.append(g_node.name) + return predecessors + + def _find_successors(self, module_name): + """ + Find successor GNodes of the given GNode + + Parameters + ---------- + module_name : str + The name of the GNode + + Returns + ------- + list + a list of GNodes who are the given GNode's successor + """ + successors = [] + for output in self.name_to_gnode[module_name].outputs: + assert output in self.input_to_gnode, "No gnode with input {}".format(output) + g_nodes = self.input_to_gnode[output] + for g_node in g_nodes: + successors.append(g_node.name) + return successors + + def infer_module_mask(self, module_name, mask=None, in_shape=None, out_shape=None): + """ + Infer input shape / output shape based on the module's weight mask / input shape / output shape. 
+
+        For a module:
+            Infer its input and output shape from its weight mask
+            Infer its output shape from its input shape
+            Infer its input shape from its output shape
+
+        If its input shape is changed, continue inferring its predecessors
+        If its output shape is changed, continue inferring its successors
+
+        Parameters
+        ----------
+        module_name : str
+            The name of the GNode
+        mask : tensor of mask or ModuleMasks
+            Mask of the weights in this GNode (i.e., module)
+        in_shape : ModuleMasks
+            Input shape of this GNode
+        out_shape : ModuleMasks
+            Output shape of this GNode
+        """
+        input_cmask = output_cmask = None
+        if module_name in self.inferred_masks:
+            module_masks = self.inferred_masks[module_name]
+        else:
+            module_masks = ModuleMasks(module_name)
+            self.inferred_masks[module_name] = module_masks
+
+        m_type = self.name_to_gnode[module_name].op_type
+        print("infer_module_mask: {}, module type: {}".format(module_name, m_type))
+        if mask is not None:
+            if not m_type in infer_from_mask:
+                raise RuntimeError("Inferring input/output shape from mask is not supported "
+                                   "for module/function: `{}`".format(m_type))
+            input_cmask, output_cmask = infer_from_mask[m_type](module_masks, mask)
+        if in_shape is not None:
+            if not m_type in infer_from_inshape:
+                raise RuntimeError("Inferring output shape from input shape is not supported "
+                                   "for module/function: `{}`".format(m_type))
+            if m_type == 'aten::view':
+                output_cmask = infer_from_inshape[m_type](module_masks,
+                                                          in_shape,
+                                                          self.name_to_gnode[module_name].auxiliary)
+            else:
+                output_cmask = infer_from_inshape[m_type](module_masks, in_shape)
+        if out_shape is not None:
+            if not m_type in infer_from_outshape:
+                raise RuntimeError("Inferring input shape from output shape is not supported "
+                                   "for module/function: `{}`".format(m_type))
+            input_cmask = infer_from_outshape[m_type](module_masks, out_shape)
+
+        if input_cmask:
+            
#print("input_cmask is not None") + predecessors = self._find_predecessors(module_name) + for _module_name in predecessors: + print("input_cmask, module_name: ", _module_name) + self.infer_module_mask(_module_name, out_shape=input_cmask) + if output_cmask: + #print("output_cmask is not None") + successors = self._find_successors(module_name) + for _module_name in successors: + print("output_cmask, module_name: ", _module_name) + self.infer_module_mask(_module_name, in_shape=output_cmask) + + def infer_modules_masks(self): + """ + Do shape inference of involved modules, including the shape of weights, inputs, output + """ + for module_name, mask in self.masks.items(): + self.infer_module_mask(module_name, mask=mask) + + def replace_compressed_modules(self): + """ + Replace all the modules that have changed (weights/inputs/output) shape. + The new module is created using the same arguments of the to-be-replaced module, + and correctly inherits its weights. + + NOTE: ```func``` type cannot be replaced as it is not a module, thus, one limitation + is that ```func``` should be not required to be replaced. 
+ """ + for module_name in self.inferred_masks: + g_node = self.name_to_gnode[module_name] + print(module_name, g_node.op_type) + if g_node.type == 'module': + super_module, leaf_module = get_module_by_name(self.bound_model, module_name) + m_type = g_node.op_type + if not m_type in replace_module: + raise RuntimeError("Has not supported replacing the module: `{}`".format(m_type)) + compressed_module = replace_module[m_type](leaf_module, self.inferred_masks[module_name]) + setattr(super_module, module_name.split('.')[-1], compressed_module) + elif g_node.type == 'func': + print("Warning: Cannot replace func...") + else: + raise RuntimeError("Unsupported GNode type: {}".format(g_node.type)) + + def speedup_model(self): + """ + There are basically two steps: + first, do mask/shape inference, + second, replace modules + """ + #print("start to compress") + self.infer_modules_masks() + self.replace_compressed_modules() + #print("finished compressing") + # resume the model mode to that before the model is speed up + if self.is_training: + self.bound_model.train() + else: + self.bound_model.eval() \ No newline at end of file diff --git a/src/sdk/pynni/nni/compression/speedup/torch/infer_shape.py b/src/sdk/pynni/nni/compression/speedup/torch/infer_shape.py new file mode 100644 index 0000000000..995dcf997f --- /dev/null +++ b/src/sdk/pynni/nni/compression/speedup/torch/infer_shape.py @@ -0,0 +1,481 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +""" +For each operation or module, there are two functions. 
+One is given output shape, infer its input shape and initialization parameters (e.g., weight's shape) +The other is given input shape, infer its output shape and initialization parameters (e.g., weight's shape) +""" + +import torch + +class CoarseMask: + """ + Coarse grained mask for a given tensor, here tensor could be weights, + input tensor, or output tensor + """ + def __init__(self, num_dim): + """ + Parameters + ---------- + num_dim : int + The number of dimensions of the tensor that will be masked + """ + self.mask_index = [None for _ in range(num_dim)] + + def add_index_mask(self, dim, index): + """ + Add mask for the specified dimension + + Parameters + ---------- + dim : int + The dimension to add mask + index : tensor + The mask for this dimension, its a 1 dimension tensor which specifies + the index of the elements that are not pruned + """ + self.mask_index[dim] = index + + @staticmethod + def merge_index(index_a, index_b): + """ + Parameters + ---------- + index_a : tensor + One index (1-dimension) tensor + index_b : tensor + The other index (1-dimension) tensor + + Returns + ------- + tensor + The merged index (1-dimension) tensor + """ + s = set() + for num in index_a: + s.add(num) + for num in index_b: + s.add(num) + return torch.tensor(sorted(s)) + + def merge(self, cmask): + """ + Merge another CoarseMask + + Parameters + ---------- + cmask : CoarseMask + Another CoarseMask to merge + + Returns + ------- + list + The member variable ```mask_index``` + """ + assert isinstance(cmask, CoarseMask) + assert len(self.mask_index) == len(cmask.mask_index), \ + "Only masks with the same number of dimensions can be merged" + for i, index in enumerate(self.mask_index): + if index is None: + self.mask_index[i] = cmask.mask_index[i] + elif cmask.mask_index[i] is not None: + self.mask_index[i] = CoarseMask.merge_index(self.mask_index[i], + cmask.mask_index[i]) + return self.mask_index + +class ModuleMasks: + """ + The masks of a module, including the masks 
for weights, inputs, output + """ + def __init__(self, module_name): + """ + Parameters + ---------- + module_name : str + The name of the module or function + """ + self.module_name = module_name + self.param_masks = dict() + self.input_mask = None + self.output_mask = None + + def set_param_masks(self, name, mask): + """ + Parameters + ---------- + name : str + The name of the weight + mask : CoarseMask + The mask for this weight + """ + self.param_masks[name] = mask + + def set_input_mask(self, mask): + """ + Parameters + ---------- + mask : CoarseMask + The mask for input + """ + self.input_mask = mask + + def set_output_mask(self, mask): + """ + Parameters + ---------- + mask : CoarseMask + The mask for output + """ + self.output_mask = mask + +""" +Infer input and output shape of a module/function from its weight mask +""" +infer_from_mask = { + 'BatchNorm2d': lambda module_masks, mask: batchnorm2d_mask(module_masks, mask), + 'Conv2d': lambda module_masks, mask: conv2d_mask(module_masks, mask) +} + +""" +Infer output and weight shape of a module/function from its input shape +""" +infer_from_inshape = { + 'ReLU': lambda module_masks, mask: relu_inshape(module_masks, mask), + 'aten::relu': lambda module_masks, mask: relu_inshape(module_masks, mask), + 'Conv2d': lambda module_masks, mask: conv2d_inshape(module_masks, mask), + 'MaxPool2d': lambda module_masks, mask: maxpool2d_inshape(module_masks, mask), + 'aten::max_pool2d': lambda module_masks, mask: maxpool2d_inshape(module_masks, mask), + 'aten::avg_pool2d': lambda module_masks, mask: maxpool2d_inshape(module_masks, mask), + 'AvgPool2d': lambda module_masks, mask: maxpool2d_inshape(module_masks, mask), + 'aten::size': lambda module_masks, mask: size_inshape(module_masks, mask), + 'aten::view': lambda module_masks, mask, shape: view_inshape(module_masks, mask, shape), + 'Linear': lambda module_masks, mask: linear_inshape(module_masks, mask), + 'BatchNorm2d': lambda module_masks, mask: 
batchnorm2d_inshape(module_masks, mask) +} + +""" +Infer input and weight shape of a module/function from its output shape +""" +infer_from_outshape = { + 'Conv2d': lambda module_masks, mask: conv2d_outshape(module_masks, mask) +} + +def batchnorm2d_inshape(module_masks, mask): + """ + We assume only the second dimension has coarse grained mask + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the batchnorm2d + mask : CoarseMask + The mask of its input tensor + + Returns + ------- + CoarseMask + The mask of its output tensor + """ + assert isinstance(mask, CoarseMask) + assert mask.mask_index[1] is not None + assert mask.mask_index[0] is None + assert mask.mask_index[2] is None + assert mask.mask_index[3] is None + module_masks.set_input_mask(mask) + module_masks.set_output_mask(mask) + weight_cmask = CoarseMask(num_dim=1) + weight_cmask.add_index_mask(dim=0, index=mask.mask_index[1]) + module_masks.set_param_masks('weight', weight_cmask) + module_masks.set_param_masks('bias', weight_cmask) + return mask + +def linear_inshape(module_masks, mask): + """ + Coarse grained input mask does not change the shape of weights and output tensor + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the linear + mask : CoarseMask + The mask of its input tensor + + Returns + ------- + CoarseMask + The mask of its output tensor, ```None``` means shape of output tensor is not changed + """ + assert isinstance(mask, CoarseMask) + assert mask.mask_index[0] is None + assert module_masks.input_mask is None + module_masks.set_input_mask(mask) + return None + +def view_inshape(module_masks, mask, shape): + """ + This is a limited support + + TODO: consider replace tensor.view with nn.Flatten, because tensor.view is not + included in module, thus, cannot be replaced by our framework. 
+ + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the ```view``` op + mask : CoarseMask + The mask of its input tensor + shape : dict + Original shape of its input and output tensors + + Returns + ------- + CoarseMask + The mask of its output tensor + """ + # NOTE: the case constrained by the following four asserts + assert shape['in_shape'][0] == shape['out_shape'][0] + assert len(shape['in_shape']) == 4 + assert len(shape['out_shape']) == 2 + assert shape['out_shape'][1] == shape['in_shape'][1]*shape['in_shape'][2]*shape['in_shape'][3] + + assert isinstance(mask, CoarseMask) + assert mask.mask_index[1] is not None + assert mask.mask_index[0] is None + assert mask.mask_index[2] is None + assert mask.mask_index[3] is None + assert module_masks.input_mask is None + module_masks.set_input_mask(mask) + output_cmask = CoarseMask(num_dim=2) + index = [] + step_size = shape['in_shape'][2] * shape['in_shape'][3] + for loc in mask.mask_index[1]: + index.extend([loc * step_size + i for i in range(step_size)]) + output_cmask.add_index_mask(dim=1, index=torch.tensor(index)) + module_masks.set_output_mask(output_cmask) + return output_cmask + + +def size_inshape(module_masks, mask): + """ + No need to do anything for this ```size``` op + """ + return None + +def maxpool2d_inshape(module_masks, mask): + """ + Assume only the second dimension is masked + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the maxpool2d + mask : CoarseMask + The mask of its input tensor + + Returns + ------- + CoarseMask + The mask of its output tensor + """ + assert isinstance(mask, CoarseMask) + assert mask.mask_index[1] is not None + assert mask.mask_index[0] is None + assert mask.mask_index[2] is None + assert mask.mask_index[3] is None + assert module_masks.input_mask is None + module_masks.set_input_mask(mask) + module_masks.set_output_mask(mask) + return mask + +def relu_inshape(module_masks, mask): + """ + Parameters + 
---------- + module_masks : ModuleMasks + The ModuleMasks instance of the relu + mask : CoarseMask + The mask of its input tensor + + Returns + ------- + CoarseMask + The mask of its output tensor + """ + assert isinstance(mask, CoarseMask) + # TODO: double check this assert, is it possible that a module is passed twice + assert module_masks.input_mask is None, "A relu op can only be processed once" + module_masks.set_input_mask(mask) + module_masks.set_output_mask(mask) + return mask + +def batchnorm2d_mask(module_masks, mask): + """ + Infer input and output shape from weight mask + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the batchnorm2d + mask : dict + The mask of its weights, from the user provided mask file + + Returns + ------- + CoarseMask, CoarseMask + The mask of its input tensor, the mask of its output tensor + """ + assert 'weight' in mask and 'bias' in mask + sum_mask = mask['weight'] + mask['bias'] + nonzero_index = torch.nonzero(sum_mask, as_tuple=True)[0] + # infer shape of parameters + param_cmask = CoarseMask(num_dim=1) + param_cmask.add_index_mask(dim=0, index=nonzero_index) + module_masks.set_param_masks('weight', param_cmask) + module_masks.set_param_masks('bias', param_cmask) + # infer shape of input tensor + input_cmask = CoarseMask(num_dim=4) + input_cmask.add_index_mask(dim=1, + index=torch.nonzero(mask['weight'], as_tuple=True)[0]) + module_masks.set_input_mask(input_cmask) + # infer shape of output tensor + output_cmask = CoarseMask(num_dim=4) + output_cmask.add_index_mask(dim=1, index=nonzero_index) + module_masks.set_output_mask(output_cmask) + return input_cmask, output_cmask + +def conv2d_mask(module_masks, mask): + """ + Infer input and output shape from weight mask + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the conv2d + mask : dict + The mask of its weights, from the user provided mask file + + Returns + ------- + CoarseMask, CoarseMask + The mask 
of its input tensor, the mask of its output tensor + """ + def convert_to_coarse_mask(mask): + """ + Parameters + ---------- + mask : dict + Weight mask from user provided mask file + + Returns + ------- + LongTensor, CoarseMask, CoarseMask + Index of the masked dimension, weight mask, bias mask + """ + assert 'weight' in mask + assert isinstance(mask['weight'], torch.Tensor) + cmask = None + weight_mask = mask['weight'] + shape = weight_mask.size() + ones = torch.ones(shape[1:]).to(weight_mask.device) + zeros = torch.zeros(shape[1:]).to(weight_mask.device) + index = [] + for i in range(shape[0]): + if torch.all(torch.eq(weight_mask[i], ones)): + index.append(i) + elif torch.all(torch.eq(weight_mask[i], zeros)): + continue + else: + index = None + break + if index is None: + return None, None, None + else: + index = torch.LongTensor(index).to(weight_mask.device) + weight_cmask = CoarseMask(num_dim=4) + weight_cmask.add_index_mask(dim=0, index=index) + bias_cmask = None + if 'bias' in mask and mask['bias'] is not None: + bias_index = torch.nonzero(mask['bias'], as_tuple=True)[0] + assert torch.all(torch.eq(index, bias_index)), \ + "bias mask should be consistent with weight mask" + bias_cmask = CoarseMask(num_dim=1) + bias_cmask.add_index_mask(dim=0, index=bias_index) + return index, weight_cmask, bias_cmask + index, weight_cmask, bias_cmask = convert_to_coarse_mask(mask) + if index is None: + # TODO: fine grained mask speedup + return None, None + # deal with coarse grain mask + if 'weight' in module_masks.param_masks: + module_masks.param_masks['weight'].merge(weight_cmask) + module_masks.param_masks['bias'].merge(bias_cmask) + else: + module_masks.set_param_masks('weight', weight_cmask) + module_masks.set_param_masks('bias', bias_cmask) + output_cmask = CoarseMask(num_dim=4) + output_cmask.add_index_mask(dim=1, index=index) + if module_masks.output_mask is None: + module_masks.set_output_mask(output_cmask) + else: + module_masks.output_mask.merge(output_cmask) + 
return None, module_masks.output_mask + +def conv2d_inshape(module_masks, mask): + """ + Shape change of input tensor does not affect the shape of its output tensor + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the conv2d + mask : CoarseMask + The mask of its input tensor + + Returns + ------- + None + The shape of the output tensor is not changed + """ + assert isinstance(mask, CoarseMask) + assert module_masks.input_mask is None + module_masks.set_input_mask(mask) + return None + +def conv2d_outshape(module_masks, mask): + """ + Assume only the second dimension is masked + + Parameters + ---------- + module_masks : ModuleMasks + The ModuleMasks instance of the conv2d + mask : CoarseMask + The mask of its output tensor + + Returns + ------- + None + The shape of the input tensor is not changed + """ + assert isinstance(mask, CoarseMask) + assert mask.mask_index[1] is not None + assert mask.mask_index[0] is None + assert mask.mask_index[2] is None + assert mask.mask_index[3] is None + + if module_masks.output_mask is not None: + assert isinstance(module_masks.output_mask, CoarseMask) + # set shape of output + mask = module_masks.output_mask.merge(mask) + else: + module_masks.output_mask = mask + # infer shape of parameters + weight_cmask = CoarseMask(num_dim=4) + weight_cmask.add_index_mask(dim=0, index=mask.mask_index[1]) + bias_cmask = CoarseMask(num_dim=1) + bias_cmask.add_index_mask(dim=0, index=mask.mask_index[1]) + module_masks.set_param_masks('weight', weight_cmask) + module_masks.set_param_masks('bias', bias_cmask) + # input shape is not changed + return None + \ No newline at end of file diff --git a/src/sdk/pynni/nni/compression/torch/__init__.py b/src/sdk/pynni/nni/compression/torch/__init__.py index d79a8f76c4..432cdf1529 100644 --- a/src/sdk/pynni/nni/compression/torch/__init__.py +++ b/src/sdk/pynni/nni/compression/torch/__init__.py @@ -6,3 +6,4 @@ from .weight_rank_filter_pruners import * from .activation_rank_filter_pruners
import * from .quantizers import * +from .apply_compression import apply_compression_results diff --git a/src/sdk/pynni/nni/compression/torch/apply_compression.py b/src/sdk/pynni/nni/compression/torch/apply_compression.py new file mode 100644 index 0000000000..2531da5039 --- /dev/null +++ b/src/sdk/pynni/nni/compression/torch/apply_compression.py @@ -0,0 +1,70 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. + +import logging +import torch +from .compressor import Pruner + +logger = logging.getLogger('torch apply compression') + +def apply_compression_results(model, masks_file): + """ + Apply the masks from `masks_file` to the model + + Parameters + ---------- + model : torch.nn.Module + The model to be compressed + masks_file : str + The path of the mask file + """ + apply_comp = ApplyCompression(model, masks_file) + apply_comp.compress() + +class ApplyCompression(Pruner): + """ + This class does not generate masks; it applies existing masks loaded from a mask file + """ + + def __init__(self, model, masks_file): + """ + Parameters + ---------- + model : torch.nn.Module + Model to be masked + masks_file : str + The path of the user-provided mask file + """ + self.bound_model = model + self.masks = torch.load(masks_file) + for module_name in self.masks: + logger.debug('module_name: %s', module_name) + config_list = self._build_config() + super().__init__(model, config_list) + + def _build_config(self): + op_names = [] + for module_name in self.masks: + op_names.append(module_name) + return [{'sparsity': 1, 'op_types': ['default', 'BatchNorm2d'], 'op_names': op_names}] + + def calc_mask(self, layer, config, **kwargs): + """ + Directly return the corresponding mask + + Parameters + ---------- + layer : LayerInfo + The layer to be pruned + config : dict + Pruning configurations for this weight + kwargs : dict + Auxiliary information + + Returns + ------- + dict + Mask of the layer + """ + assert layer.name in self.masks + return self.masks[layer.name] From
fdcd877f1e9e5aa22f0d9e169e1cdb5c186bf7b7 Mon Sep 17 00:00:00 2001 From: SparkSnail Date: Mon, 10 Feb 2020 17:52:04 +0800 Subject: [PATCH 7/9] Fix remote pipeline (#2023) --- deployment/docker/Dockerfile | 2 +- test/pipelines-it-remote.yml | 12 ++++++------ 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/deployment/docker/Dockerfile b/deployment/docker/Dockerfile index b2c417e1b1..e89cf1def4 100644 --- a/deployment/docker/Dockerfile +++ b/deployment/docker/Dockerfile @@ -52,7 +52,7 @@ RUN python3 -m pip --no-cache-dir install Keras==2.1.6 # PyTorch # RUN python3 -m pip --no-cache-dir install torch==1.2.0 -RUN python3 -m pip install torchvision==0.4.0 +RUN python3 -m pip install torchvision==0.5.0 # # sklearn 0.20.0 diff --git a/test/pipelines-it-remote.yml b/test/pipelines-it-remote.yml index 8fe96552fb..8b19cbd5f4 100644 --- a/test/pipelines-it-remote.yml +++ b/test/pipelines-it-remote.yml @@ -20,31 +20,31 @@ jobs: displayName: 'Install dependencies for integration tests in remote mode' - task: CopyFilesOverSSH@0 inputs: - sshEndpoint: remote_nni-ci-gpu-01 + sshEndpoint: $(end_point) sourceFolder: deployment/pypi/dist/ targetFolder: /tmp/nnitest/$(Build.BuildId)/dist overwrite: true displayName: 'Copy dist files to remote machine' - task: CopyFilesOverSSH@0 inputs: - sshEndpoint: remote_nni-ci-gpu-01 + sshEndpoint: $(end_point) sourceFolder: test targetFolder: /tmp/nnitest/$(Build.BuildId)/test overwrite: true displayName: 'Copy test files to remote machine' - task: SSH@0 inputs: - sshEndpoint: remote_nni-ci-gpu-01 + sshEndpoint: $(end_point) runOptions: commands commands: python3 /tmp/nnitest/$(Build.BuildId)/test/remote_docker.py --mode start --name $(Build.BuildId) --image nni/nni displayName: 'Start docker' - task: DownloadSecureFile@1 inputs: - secureFile: remote_ci_private_key + secureFile: $(remote_private_key) - script: | - cp $(Agent.TempDirectory)/remote_ci_private_key test/id_rsa + cp $(Agent.TempDirectory)/$(remote_private_key) test/id_rsa 
chmod 600 test/id_rsa - scp -i test/id_rsa $(remote_user)@$(remote_host):/tmp/nnitest/$(Build.BuildId)/port test/port + scp -P $(remote_port) -i test/id_rsa $(remote_user)@$(remote_host):/tmp/nnitest/$(Build.BuildId)/port test/port cat test/port displayName: 'Get docker port' - script: | From affb21187f7aaaed8c164c7f4b9c77e212ae3c5d Mon Sep 17 00:00:00 2001 From: QuanluZhang Date: Mon, 10 Feb 2020 20:19:07 +0800 Subject: [PATCH 8/9] support proxylessnas with NNI NAS APIs (#1863) --- docs/en_US/NAS/Overview.md | 1 + docs/en_US/NAS/Proxylessnas.md | 63 +++ docs/en_US/nas.rst | 1 + docs/img/proxylessnas.png | Bin 0 -> 26933 bytes examples/nas/proxylessnas/datasets.py | 188 +++++++ examples/nas/proxylessnas/main.py | 105 ++++ examples/nas/proxylessnas/model.py | 131 +++++ examples/nas/proxylessnas/ops.py | 329 ++++++++++++ examples/nas/proxylessnas/putils.py | 67 +++ examples/nas/proxylessnas/retrain.py | 183 +++++++ src/sdk/pynni/nni/nas/pytorch/base_mutator.py | 4 + .../nni/nas/pytorch/proxylessnas/__init__.py | 2 + .../nni/nas/pytorch/proxylessnas/mutator.py | 476 +++++++++++++++++ .../nni/nas/pytorch/proxylessnas/trainer.py | 500 ++++++++++++++++++ .../nni/nas/pytorch/proxylessnas/utils.py | 78 +++ 15 files changed, 2128 insertions(+) create mode 100644 docs/en_US/NAS/Proxylessnas.md create mode 100644 docs/img/proxylessnas.png create mode 100644 examples/nas/proxylessnas/datasets.py create mode 100644 examples/nas/proxylessnas/main.py create mode 100644 examples/nas/proxylessnas/model.py create mode 100644 examples/nas/proxylessnas/ops.py create mode 100644 examples/nas/proxylessnas/putils.py create mode 100644 examples/nas/proxylessnas/retrain.py create mode 100644 src/sdk/pynni/nni/nas/pytorch/proxylessnas/__init__.py create mode 100644 src/sdk/pynni/nni/nas/pytorch/proxylessnas/mutator.py create mode 100644 src/sdk/pynni/nni/nas/pytorch/proxylessnas/trainer.py create mode 100644 src/sdk/pynni/nni/nas/pytorch/proxylessnas/utils.py diff --git 
a/docs/en_US/NAS/Overview.md b/docs/en_US/NAS/Overview.md index 5e63acc76b..1a325d911f 100644 --- a/docs/en_US/NAS/Overview.md +++ b/docs/en_US/NAS/Overview.md @@ -19,6 +19,7 @@ NNI supports below NAS algorithms now and is adding more. User can reproduce an | [P-DARTS](PDARTS.md) | [Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) is based on DARTS. It introduces an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. | | [SPOS](SPOS.md) | [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) constructs a simplified supernet trained with an uniform path sampling method, and applies an evolutionary algorithm to efficiently search for the best-performing architectures. | | [CDARTS](CDARTS.md) | [Cyclic Differentiable Architecture Search](https://arxiv.org/abs/****) builds a cyclic feedback mechanism between the search and evaluation networks. It introduces a cyclic differentiable architecture search framework which integrates the two networks into a unified architecture.| +| [ProxylessNAS](Proxylessnas.md) | [ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware](https://arxiv.org/abs/1812.00332).| One-shot algorithms run **standalone without nnictl**. Only PyTorch version has been implemented. Tensorflow 2.x will be supported in future release. diff --git a/docs/en_US/NAS/Proxylessnas.md b/docs/en_US/NAS/Proxylessnas.md new file mode 100644 index 0000000000..9c913203d8 --- /dev/null +++ b/docs/en_US/NAS/Proxylessnas.md @@ -0,0 +1,63 @@ +# ProxylessNAS on NNI + +## Introduction + +The paper [ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware](https://arxiv.org/pdf/1812.00332.pdf) removes the proxy: it directly learns architectures for large-scale target tasks and target hardware platforms.
It addresses the high memory consumption issue of differentiable NAS and reduces the computational cost to the same level as regular training, while still allowing a large candidate set. Please refer to the paper for the details. + +## Usage + +To use the ProxylessNAS training/search approach, users need to specify the search space in their model using the [NNI NAS interface](NasGuide.md), e.g., `LayerChoice`, `InputChoice`. After defining and instantiating the model, the remaining work can be left to ProxylessNasTrainer by instantiating the trainer and passing the model to it. +```python +trainer = ProxylessNasTrainer(model, + model_optim=optimizer, + train_loader=data_provider.train, + valid_loader=data_provider.valid, + device=device, + warmup=True, + ckpt_path=args.checkpoint_path, + arch_path=args.arch_path) +trainer.train() +trainer.export(args.arch_path) +``` +The complete example code can be found [here](https://github.com/microsoft/nni/tree/master/examples/nas/proxylessnas). + +**Input arguments of ProxylessNasTrainer** + +* **model** (*PyTorch model, required*) - The model that users want to tune/search. It has mutables to specify the search space. +* **model_optim** (*PyTorch optimizer, required*) - The optimizer used to train the model. +* **device** (*device, required*) - The device(s) used for training/search. The trainer applies data parallel on the model for users. +* **train_loader** (*PyTorch data loader, required*) - The data loader for the training set. +* **valid_loader** (*PyTorch data loader, required*) - The data loader for the validation set. +* **label_smoothing** (*float, optional, default = 0.1*) - The degree of label smoothing. +* **n_epochs** (*int, optional, default = 120*) - The number of epochs to train/search. +* **init_lr** (*float, optional, default = 0.025*) - The initial learning rate for training the model.
+* **binary_mode** (*'two', 'full', or 'full_v2', optional, default = 'full_v2'*) - The forward/backward mode for the binary weights in the mutator. 'full' means forwarding all candidate ops, 'two' means forwarding only two sampled ops, and 'full_v2' means recomputing the inactive ops during backward. +* **arch_init_type** (*'normal' or 'uniform', optional, default = 'normal'*) - The way to initialize architecture parameters. +* **arch_init_ratio** (*float, optional, default = 1e-3*) - The ratio used to initialize architecture parameters. +* **arch_optim_lr** (*float, optional, default = 1e-3*) - The learning rate of the architecture parameters optimizer. +* **arch_weight_decay** (*float, optional, default = 0*) - Weight decay of the architecture parameters optimizer. +* **grad_update_arch_param_every** (*int, optional, default = 5*) - Update architecture weights every this many minibatches. +* **grad_update_steps** (*int, optional, default = 1*) - The number of steps to train architecture weights during each update. +* **warmup** (*bool, optional, default = True*) - Whether to do warmup. +* **warmup_epochs** (*int, optional, default = 25*) - The number of warmup epochs. +* **arch_valid_frequency** (*int, optional, default = 1*) - The frequency of printing validation results. +* **load_ckpt** (*bool, optional, default = False*) - Whether to load a checkpoint. +* **ckpt_path** (*str, optional, default = None*) - The checkpoint path; if `load_ckpt` is True, `ckpt_path` cannot be None. +* **arch_path** (*str, optional, default = None*) - The path to store the chosen architecture. + + +## Implementation + +The implementation on NNI is based on the [official implementation](https://github.com/mit-han-lab/ProxylessNAS). The official implementation supports two training approaches: gradient descent and RL-based, and supports different target hardware, including 'mobile', 'cpu', 'gpu8', 'flops'.
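As a side note on the `binary_mode` argument listed above, the core idea of binarizing architecture weights can be sketched in plain Python. This is a simplified illustration only (single sampled path, no backward pass); the real trainer operates on PyTorch modules and also handles the 'two' and 'full_v2' variants:

```python
import math
import random

def softmax(xs):
    # numerically stable softmax over a list of floats
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op_forward(ops, arch_params, x, rng=random):
    """Sample one active candidate op from softmax(arch_params) and forward
    only that op, so memory stays at the level of a single-path network."""
    probs = softmax(arch_params)
    r, acc = rng.random(), 0.0
    for op, p in zip(ops, probs):
        acc += p
        if r <= acc:
            return op(x)
    return ops[-1](x)  # guard against floating-point rounding
```

Because only the sampled op is executed, the memory cost per step is that of a single architecture rather than the full supernet, which is the point of the binarized modes.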
Our current NNI implementation supports the gradient descent training approach but does not yet support different hardware targets. Complete support is ongoing. + +Below we describe the implementation details. Like other one-shot NAS algorithms on NNI, ProxylessNAS is composed of two parts: *search space* and *training approach*. To let users flexibly define their own search space and use the built-in ProxylessNAS training approach, we put the specified search space in the [example code](https://github.com/microsoft/nni/tree/master/examples/nas/proxylessnas) using the [NNI NAS interface](NasGuide.md), and put the training approach in the [SDK](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch/proxylessnas). + +![](../../img/proxylessnas.png) + +The ProxylessNAS training approach is composed of ProxylessNasMutator and ProxylessNasTrainer. ProxylessNasMutator instantiates a MixedOp for each mutable (i.e., LayerChoice) and manages the architecture weights in each MixedOp. **For DataParallel**, architecture weights should be included in the user model. Specifically, in the ProxylessNAS implementation, we add the MixedOp to the corresponding mutable (i.e., LayerChoice) as a member variable. The mutator also exposes two member functions, i.e., `arch_requires_grad` and `arch_disable_grad`, for the trainer to control the training of architecture weights. + +ProxylessNasMutator also implements the forward logic of the mutables (i.e., LayerChoice). + +## Reproduce Results + +Ongoing...
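Since the trainer's `export` call described in the usage section ultimately records one chosen op per mutable, the selection rule can be illustrated with a hypothetical argmax over per-layer architecture weights. The layer names and weight lists here are made up for illustration; the actual `export` writes the chosen architecture to `arch_path`:

```python
def export_architecture(arch_weights):
    """For every mutable layer, choose the candidate op with the largest
    architecture weight. `arch_weights` maps layer name -> list of weights.
    Illustrative rule only, not the trainer's actual export code."""
    return {name: max(range(len(ws)), key=ws.__getitem__)
            for name, ws in arch_weights.items()}
```

This is why the architecture weights managed by the mutator are all that is needed to derive the final, single-path network once training finishes.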
diff --git a/docs/en_US/nas.rst b/docs/en_US/nas.rst index b04f3a9e70..73b6aad0e5 100644 --- a/docs/en_US/nas.rst +++ b/docs/en_US/nas.rst @@ -24,4 +24,5 @@ P-DARTS SPOS CDARTS + ProxylessNAS API Reference diff --git a/docs/img/proxylessnas.png b/docs/img/proxylessnas.png new file mode 100644 index 0000000000000000000000000000000000000000..274e1dbd5b63e9142783baaf3b2ac7131047c6fb GIT binary patch literal 26933 (binary image data omitted)
z0~+U`86Lw=xzQ6c&CiDBJgq4fui3bOVrKj?&g;JL<*j#EU zj?=mtQb=Lr^W3#RBwjtx$x#7Gy2a@#_@w=fV!u~!bR*@K=2rd4OZWuM+m73AN2jr_ z41pK&l8XkL*~ybwyb(aPGxLg|8|>$^GLmfgizn z^+r^b#f7mxdt6a%S+a2p{`7$#szf!3@y(TCpBe)W|Y)8 z5HH)hIln3_#cJyxGwi(Y0M$GECQQ&AyXZT;+>alh(7^^XEj5OTzKGg5^JChS{&8mI z3#l~o?X9E$MY|WR^P#Tl_izHj>7w_zc;a?$c@6~tJ?tVK8RN$@Am(D5lINdKN*}pU z=7C2Abl#{Q6(Gx2DT}1Cime=*0Z2r)-&hykhBu?@6AJ7k2ytDVoz9LBpPU)1!q0>>9+?Sxxy(5AiNbXvXDcrikWPcP)3Il2Ox#cbB=- zJN7wT;%R4(BK6GQCtlw~;V^jv1{@vC+R%ZI!&+dlyw9i-cakSSMiREmE(EuN=sj-t_!7uGV zUD%<_ks_fOT<=(yXGIFButJ4Lk*NCcK|)N%y-!#kYHRFI7Bq#Zr*qf8g_DTSlYLaofQHw6s5W5DIY+^_ueY zpz!^@Ag9x=@TX{8o6jLcj*V}1<6HhX==JZPd0QSEbQFx;Y2*2G0r2;Shx-QlZrt>b z(jk{34>}WigiWN1S9NrK8c%$&hAOIIhVx!)+TF& zE`G-^DptAjw=Yk1;US^wurts9qJNLtVV9qVYVS;X&A{P*-!}b^?SQy{ZHgZpjCuV& Vp;=PHO>cF?U}9*FdZh0h^Iuh^t_uJF literal 0 HcmV?d00001 diff --git a/examples/nas/proxylessnas/datasets.py b/examples/nas/proxylessnas/datasets.py new file mode 100644 index 0000000000..b939005749 --- /dev/null +++ b/examples/nas/proxylessnas/datasets.py @@ -0,0 +1,188 @@ +import os +import numpy as np +import torch.utils.data +import torchvision.transforms as transforms +import torchvision.datasets as datasets + +def get_split_list(in_dim, child_num): + in_dim_list = [in_dim // child_num] * child_num + for _i in range(in_dim % child_num): + in_dim_list[_i] += 1 + return in_dim_list + +class DataProvider: + VALID_SEED = 0 # random seed for the validation set + + @staticmethod + def name(): + """ Return name of the dataset """ + raise NotImplementedError + + @property + def data_shape(self): + """ Return shape as python list of one data entry """ + raise NotImplementedError + + @property + def n_classes(self): + """ Return `int` of num classes """ + raise NotImplementedError + + @property + def save_path(self): + """ local path to save the data """ + raise NotImplementedError + + @property + def data_url(self): + """ link to download the data """ + raise NotImplementedError + + @staticmethod + def 
random_sample_valid_set(train_labels, valid_size, n_classes): + train_size = len(train_labels) + assert train_size > valid_size + + g = torch.Generator() + g.manual_seed(DataProvider.VALID_SEED) # set random seed before sampling validation set + rand_indexes = torch.randperm(train_size, generator=g).tolist() + + train_indexes, valid_indexes = [], [] + per_class_remain = get_split_list(valid_size, n_classes) + + for idx in rand_indexes: + label = train_labels[idx] + if isinstance(label, float): + label = int(label) + elif isinstance(label, np.ndarray): + label = np.argmax(label) + else: + assert isinstance(label, int) + if per_class_remain[label] > 0: + valid_indexes.append(idx) + per_class_remain[label] -= 1 + else: + train_indexes.append(idx) + return train_indexes, valid_indexes + + +class ImagenetDataProvider(DataProvider): + + def __init__(self, save_path=None, train_batch_size=256, test_batch_size=512, valid_size=None, + n_worker=32, resize_scale=0.08, distort_color=None): + + self._save_path = save_path + train_transforms = self.build_train_transform(distort_color, resize_scale) + train_dataset = datasets.ImageFolder(self.train_path, train_transforms) + + if valid_size is not None: + if isinstance(valid_size, float): + valid_size = int(valid_size * len(train_dataset)) + else: + assert isinstance(valid_size, int), 'invalid valid_size: %s' % valid_size + train_indexes, valid_indexes = self.random_sample_valid_set( + [cls for _, cls in train_dataset.samples], valid_size, self.n_classes, + ) + train_sampler = torch.utils.data.sampler.SubsetRandomSampler(train_indexes) + valid_sampler = torch.utils.data.sampler.SubsetRandomSampler(valid_indexes) + + valid_dataset = datasets.ImageFolder(self.train_path, transforms.Compose([ + transforms.Resize(self.resize_value), + transforms.CenterCrop(self.image_size), + transforms.ToTensor(), + self.normalize, + ])) + + self.train = torch.utils.data.DataLoader( + train_dataset, batch_size=train_batch_size, sampler=train_sampler, 
+ num_workers=n_worker, pin_memory=True, + ) + self.valid = torch.utils.data.DataLoader( + valid_dataset, batch_size=test_batch_size, sampler=valid_sampler, + num_workers=n_worker, pin_memory=True, + ) + else: + self.train = torch.utils.data.DataLoader( + train_dataset, batch_size=train_batch_size, shuffle=True, + num_workers=n_worker, pin_memory=True, + ) + self.valid = None + + self.test = torch.utils.data.DataLoader( + datasets.ImageFolder(self.valid_path, transforms.Compose([ + transforms.Resize(self.resize_value), + transforms.CenterCrop(self.image_size), + transforms.ToTensor(), + self.normalize, + ])), batch_size=test_batch_size, shuffle=False, num_workers=n_worker, pin_memory=True, + ) + + if self.valid is None: + self.valid = self.test + + @staticmethod + def name(): + return 'imagenet' + + @property + def data_shape(self): + return 3, self.image_size, self.image_size # C, H, W + + @property + def n_classes(self): + return 1000 + + @property + def save_path(self): + if self._save_path is None: + self._save_path = '/dataset/imagenet' + return self._save_path + + @property + def data_url(self): + raise ValueError('unable to download ImageNet') + + @property + def train_path(self): + return os.path.join(self.save_path, 'train') + + @property + def valid_path(self): + return os.path.join(self._save_path, 'val') + + @property + def normalize(self): + return transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) + + def build_train_transform(self, distort_color, resize_scale): + print('Color jitter: %s' % distort_color) + if distort_color == 'strong': + color_transform = transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1) + elif distort_color == 'normal': + color_transform = transforms.ColorJitter(brightness=32. 
/ 255., saturation=0.5) + else: + color_transform = None + if color_transform is None: + train_transforms = transforms.Compose([ + transforms.RandomResizedCrop(self.image_size, scale=(resize_scale, 1.0)), + transforms.RandomHorizontalFlip(), + transforms.ToTensor(), + self.normalize, + ]) + else: + train_transforms = transforms.Compose([ + transforms.RandomResizedCrop(self.image_size, scale=(resize_scale, 1.0)), + transforms.RandomHorizontalFlip(), + color_transform, + transforms.ToTensor(), + self.normalize, + ]) + return train_transforms + + @property + def resize_value(self): + return 256 + + @property + def image_size(self): + return 224 \ No newline at end of file diff --git a/examples/nas/proxylessnas/main.py b/examples/nas/proxylessnas/main.py new file mode 100644 index 0000000000..a675cc7231 --- /dev/null +++ b/examples/nas/proxylessnas/main.py @@ -0,0 +1,105 @@ +import os +import sys +import logging +from argparse import ArgumentParser +import torch +import datasets + +from putils import get_parameters +from model import SearchMobileNet +from nni.nas.pytorch.proxylessnas import ProxylessNasTrainer +from retrain import Retrain + +logger = logging.getLogger('nni_proxylessnas') + +if __name__ == "__main__": + parser = ArgumentParser("proxylessnas") + # configurations of the model + parser.add_argument("--n_cell_stages", default='4,4,4,4,4,1', type=str) + parser.add_argument("--stride_stages", default='2,2,2,1,2,1', type=str) + parser.add_argument("--width_stages", default='24,40,80,96,192,320', type=str) + parser.add_argument("--bn_momentum", default=0.1, type=float) + parser.add_argument("--bn_eps", default=1e-3, type=float) + parser.add_argument("--dropout_rate", default=0, type=float) + parser.add_argument("--no_decay_keys", default='bn', type=str, choices=[None, 'bn', 'bn#bias']) + # configurations of imagenet dataset + parser.add_argument("--data_path", default='/data/imagenet/', type=str) + parser.add_argument("--train_batch_size", default=256, 
type=int) + parser.add_argument("--test_batch_size", default=500, type=int) + parser.add_argument("--n_worker", default=32, type=int) + parser.add_argument("--resize_scale", default=0.08, type=float) + parser.add_argument("--distort_color", default='normal', type=str, choices=['normal', 'strong', 'None']) + # configurations for training mode + parser.add_argument("--train_mode", default='search', type=str, choices=['search', 'retrain']) + # configurations for search + parser.add_argument("--checkpoint_path", default='./search_mobile_net.pt', type=str) + parser.add_argument("--arch_path", default='./arch_path.pt', type=str) + # configurations for retrain + parser.add_argument("--exported_arch_path", default=None, type=str) + + args = parser.parse_args() + if args.train_mode == 'retrain' and args.exported_arch_path is None: + logger.error('When --train_mode is retrain, --exported_arch_path must be specified.') + sys.exit(-1) + + model = SearchMobileNet(width_stages=[int(i) for i in args.width_stages.split(',')], + n_cell_stages=[int(i) for i in args.n_cell_stages.split(',')], + stride_stages=[int(i) for i in args.stride_stages.split(',')], + n_classes=1000, + dropout_rate=args.dropout_rate, + bn_param=(args.bn_momentum, args.bn_eps)) + logger.info('SearchMobileNet model create done') + model.init_model() + logger.info('SearchMobileNet model init done') + + # move network to GPU if available + if torch.cuda.is_available(): + device = torch.device('cuda:0') + else: + device = torch.device('cpu') + + logger.info('Creating data provider...') + data_provider = datasets.ImagenetDataProvider(save_path=args.data_path, + train_batch_size=args.train_batch_size, + test_batch_size=args.test_batch_size, + valid_size=None, + n_worker=args.n_worker, + resize_scale=args.resize_scale, + distort_color=args.distort_color) + logger.info('Creating data provider done') + + if args.no_decay_keys: + keys = args.no_decay_keys + momentum, nesterov = 0.9, True + optimizer = torch.optim.SGD([ + 
{'params': get_parameters(model, keys, mode='exclude'), 'weight_decay': 4e-5}, + {'params': get_parameters(model, keys, mode='include'), 'weight_decay': 0}, + ], lr=0.05, momentum=momentum, nesterov=nesterov) + else: + optimizer = torch.optim.SGD(get_parameters(model), lr=0.05, momentum=momentum, nesterov=nesterov, weight_decay=4e-5) + + if args.train_mode == 'search': + # this is architecture search + logger.info('Creating ProxylessNasTrainer...') + trainer = ProxylessNasTrainer(model, + model_optim=optimizer, + train_loader=data_provider.train, + valid_loader=data_provider.valid, + device=device, + warmup=True, + ckpt_path=args.checkpoint_path, + arch_path=args.arch_path) + + logger.info('Start to train with ProxylessNasTrainer...') + trainer.train() + logger.info('Training done') + trainer.export(args.arch_path) + logger.info('Best architecture exported in %s', args.arch_path) + elif args.train_mode == 'retrain': + # this is retrain + from nni.nas.pytorch.fixed import apply_fixed_architecture + assert os.path.isfile(args.exported_arch_path), \ + "exported_arch_path {} should be a file.".format(args.exported_arch_path) + apply_fixed_architecture(model, args.exported_arch_path, device=device) + trainer = Retrain(model, optimizer, device, data_provider, n_epochs=300) + trainer.run() \ No newline at end of file diff --git a/examples/nas/proxylessnas/model.py b/examples/nas/proxylessnas/model.py new file mode 100644 index 0000000000..ee32970d7f --- /dev/null +++ b/examples/nas/proxylessnas/model.py @@ -0,0 +1,131 @@ +import torch +import torch.nn as nn +import math + +import ops +import putils +from nni.nas import pytorch as nas + +class SearchMobileNet(nn.Module): + def __init__(self, + width_stages=[24,40,80,96,192,320], + n_cell_stages=[4,4,4,4,4,1], + stride_stages=[2,2,2,1,2,1], + width_mult=1, n_classes=1000, + dropout_rate=0, bn_param=(0.1, 1e-3)): + """ + Parameters + ---------- + width_stages: str + width (output channels) of each cell stage in the block + 
n_cell_stages: str + number of cells in each cell stage + stride_strages: str + stride of each cell stage in the block + width_mult : int + the scale factor of width + """ + super(SearchMobileNet, self).__init__() + + input_channel = putils.make_divisible(32 * width_mult, 8) + first_cell_width = putils.make_divisible(16 * width_mult, 8) + for i in range(len(width_stages)): + width_stages[i] = putils.make_divisible(width_stages[i] * width_mult, 8) + # first conv + first_conv = ops.ConvLayer(3, input_channel, kernel_size=3, stride=2, use_bn=True, act_func='relu6', ops_order='weight_bn_act') + # first block + first_block_conv = ops.OPS['3x3_MBConv1'](input_channel, first_cell_width, 1) + first_block = first_block_conv + + input_channel = first_cell_width + + blocks = [first_block] + + stage_cnt = 0 + for width, n_cell, s in zip(width_stages, n_cell_stages, stride_stages): + for i in range(n_cell): + if i == 0: + stride = s + else: + stride = 1 + op_candidates = [ops.OPS['3x3_MBConv3'](input_channel, width, stride), + ops.OPS['3x3_MBConv6'](input_channel, width, stride), + ops.OPS['5x5_MBConv3'](input_channel, width, stride), + ops.OPS['5x5_MBConv6'](input_channel, width, stride), + ops.OPS['7x7_MBConv3'](input_channel, width, stride), + ops.OPS['7x7_MBConv6'](input_channel, width, stride)] + if stride == 1 and input_channel == width: + # if it is not the first one + op_candidates += [ops.OPS['Zero'](input_channel, width, stride)] + conv_op = nas.mutables.LayerChoice(op_candidates, + return_mask=True, + key="s{}_c{}".format(stage_cnt, i)) + else: + conv_op = nas.mutables.LayerChoice(op_candidates, + return_mask=True, + key="s{}_c{}".format(stage_cnt, i)) + # shortcut + if stride == 1 and input_channel == width: + # if not first cell + shortcut = ops.IdentityLayer(input_channel, input_channel) + else: + shortcut = None + inverted_residual_block = ops.MobileInvertedResidualBlock(conv_op, shortcut, op_candidates) + blocks.append(inverted_residual_block) + input_channel = 
width + stage_cnt += 1 + + # feature mix layer + last_channel = putils.make_devisible(1280 * width_mult, 8) if width_mult > 1.0 else 1280 + feature_mix_layer = ops.ConvLayer(input_channel, last_channel, kernel_size=1, use_bn=True, act_func='relu6', ops_order='weight_bn_act', ) + classifier = ops.LinearLayer(last_channel, n_classes, dropout_rate=dropout_rate) + + self.first_conv = first_conv + self.blocks = nn.ModuleList(blocks) + self.feature_mix_layer = feature_mix_layer + self.global_avg_pooling = nn.AdaptiveAvgPool2d(1) + self.classifier = classifier + + # set bn param + self.set_bn_param(momentum=bn_param[0], eps=bn_param[1]) + + def forward(self, x): + x = self.first_conv(x) + for block in self.blocks: + x = block(x) + x = self.feature_mix_layer(x) + x = self.global_avg_pooling(x) + x = x.view(x.size(0), -1) + x = self.classifier(x) + return x + + def set_bn_param(self, momentum, eps): + for m in self.modules(): + if isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d): + m.momentum = momentum + m.eps = eps + return + + def init_model(self, model_init='he_fout', init_div_groups=False): + for m in self.modules(): + if isinstance(m, nn.Conv2d): + if model_init == 'he_fout': + n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels + if init_div_groups: + n /= m.groups + m.weight.data.normal_(0, math.sqrt(2. / n)) + elif model_init == 'he_fin': + n = m.kernel_size[0] * m.kernel_size[1] * m.in_channels + if init_div_groups: + n /= m.groups + m.weight.data.normal_(0, math.sqrt(2. / n)) + else: + raise NotImplementedError + elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d): + m.weight.data.fill_(1) + m.bias.data.zero_() + elif isinstance(m, nn.Linear): + stdv = 1. 
/ math.sqrt(m.weight.size(1)) + m.weight.data.uniform_(-stdv, stdv) + if m.bias is not None: + m.bias.data.zero_() diff --git a/examples/nas/proxylessnas/ops.py b/examples/nas/proxylessnas/ops.py new file mode 100644 index 0000000000..6ff0bbf1cc --- /dev/null +++ b/examples/nas/proxylessnas/ops.py @@ -0,0 +1,329 @@ +from collections import OrderedDict +import torch +import torch.nn as nn + +from putils import get_same_padding, build_activation + + +OPS = { + 'Identity': lambda in_C, out_C, stride: IdentityLayer(in_C, out_C, ops_order='weight_bn_act'), + 'Zero': lambda in_C, out_C, stride: ZeroLayer(stride=stride), + '3x3_MBConv1': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 3, stride, 1), + '3x3_MBConv2': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 3, stride, 2), + '3x3_MBConv3': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 3, stride, 3), + '3x3_MBConv4': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 3, stride, 4), + '3x3_MBConv5': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 3, stride, 5), + '3x3_MBConv6': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 3, stride, 6), + '5x5_MBConv1': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 5, stride, 1), + '5x5_MBConv2': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 5, stride, 2), + '5x5_MBConv3': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 5, stride, 3), + '5x5_MBConv4': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 5, stride, 4), + '5x5_MBConv5': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 5, stride, 5), + '5x5_MBConv6': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 5, stride, 6), + '7x7_MBConv1': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 7, stride, 1), + '7x7_MBConv2': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 7, stride, 2), + '7x7_MBConv3': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, 
out_C, 7, stride, 3), + '7x7_MBConv4': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 7, stride, 4), + '7x7_MBConv5': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 7, stride, 5), + '7x7_MBConv6': lambda in_C, out_C, stride: MBInvertedConvLayer(in_C, out_C, 7, stride, 6) +} + + +class MobileInvertedResidualBlock(nn.Module): + + def __init__(self, mobile_inverted_conv, shortcut, op_candidates_list): + super(MobileInvertedResidualBlock, self).__init__() + + self.mobile_inverted_conv = mobile_inverted_conv + self.shortcut = shortcut + self.op_candidates_list = op_candidates_list + + def forward(self, x): + out, idx = self.mobile_inverted_conv(x) + # TODO: unify idx format + if not isinstance(idx, int): + idx = (idx == 1).nonzero() + if self.op_candidates_list[idx].is_zero_layer(): + res = x + elif self.shortcut is None: + res = out + else: + conv_x = out + skip_x = self.shortcut(x) + res = skip_x + conv_x + return res + + +class ShuffleLayer(nn.Module): + def __init__(self, groups): + super(ShuffleLayer, self).__init__() + self.groups = groups + + def forward(self, x): + batchsize, num_channels, height, width = x.size() + channels_per_group = num_channels // self.groups + # reshape + x = x.view(batchsize, self.groups, channels_per_group, height, width) + # noinspection PyUnresolvedReferences + x = torch.transpose(x, 1, 2).contiguous() + # flatten + x = x.view(batchsize, -1, height, width) + return x + +class Base2DLayer(nn.Module): + + def __init__(self, in_channels, out_channels, + use_bn=True, act_func='relu', dropout_rate=0, ops_order='weight_bn_act'): + super(Base2DLayer, self).__init__() + self.in_channels = in_channels + self.out_channels = out_channels + + self.use_bn = use_bn + self.act_func = act_func + self.dropout_rate = dropout_rate + self.ops_order = ops_order + + """ modules """ + modules = {} + # batch norm + if self.use_bn: + if self.bn_before_weight: + modules['bn'] = nn.BatchNorm2d(in_channels) + else: + modules['bn'] = 
nn.BatchNorm2d(out_channels) + else: + modules['bn'] = None + # activation + modules['act'] = build_activation(self.act_func, self.ops_list[0] != 'act') + # dropout + if self.dropout_rate > 0: + modules['dropout'] = nn.Dropout2d(self.dropout_rate, inplace=True) + else: + modules['dropout'] = None + # weight + modules['weight'] = self.weight_op() + + # add modules + for op in self.ops_list: + if modules[op] is None: + continue + elif op == 'weight': + if modules['dropout'] is not None: + self.add_module('dropout', modules['dropout']) + for key in modules['weight']: + self.add_module(key, modules['weight'][key]) + else: + self.add_module(op, modules[op]) + + @property + def ops_list(self): + return self.ops_order.split('_') + + @property + def bn_before_weight(self): + for op in self.ops_list: + if op == 'bn': + return True + elif op == 'weight': + return False + raise ValueError('Invalid ops_order: %s' % self.ops_order) + + def weight_op(self): + raise NotImplementedError + + def forward(self, x): + for module in self._modules.values(): + x = module(x) + return x + + @staticmethod + def is_zero_layer(): + return False + + +class ConvLayer(Base2DLayer): + + def __init__(self, in_channels, out_channels, + kernel_size=3, stride=1, dilation=1, groups=1, bias=False, has_shuffle=False, + use_bn=True, act_func='relu', dropout_rate=0, ops_order='weight_bn_act'): + self.kernel_size = kernel_size + self.stride = stride + self.dilation = dilation + self.groups = groups + self.bias = bias + self.has_shuffle = has_shuffle + + super(ConvLayer, self).__init__(in_channels, out_channels, use_bn, act_func, dropout_rate, ops_order) + + def weight_op(self): + padding = get_same_padding(self.kernel_size) + if isinstance(padding, int): + padding *= self.dilation + else: + padding[0] *= self.dilation + padding[1] *= self.dilation + + weight_dict = OrderedDict() + weight_dict['conv'] = nn.Conv2d( + self.in_channels, self.out_channels, kernel_size=self.kernel_size, stride=self.stride, 
padding=padding, + dilation=self.dilation, groups=self.groups, bias=self.bias + ) + if self.has_shuffle and self.groups > 1: + weight_dict['shuffle'] = ShuffleLayer(self.groups) + + return weight_dict + + +class IdentityLayer(Base2DLayer): + + def __init__(self, in_channels, out_channels, + use_bn=False, act_func=None, dropout_rate=0, ops_order='weight_bn_act'): + super(IdentityLayer, self).__init__(in_channels, out_channels, use_bn, act_func, dropout_rate, ops_order) + + def weight_op(self): + return None + + +class LinearLayer(nn.Module): + + def __init__(self, in_features, out_features, bias=True, + use_bn=False, act_func=None, dropout_rate=0, ops_order='weight_bn_act'): + super(LinearLayer, self).__init__() + + self.in_features = in_features + self.out_features = out_features + self.bias = bias + + self.use_bn = use_bn + self.act_func = act_func + self.dropout_rate = dropout_rate + self.ops_order = ops_order + + """ modules """ + modules = {} + # batch norm + if self.use_bn: + if self.bn_before_weight: + modules['bn'] = nn.BatchNorm1d(in_features) + else: + modules['bn'] = nn.BatchNorm1d(out_features) + else: + modules['bn'] = None + # activation + modules['act'] = build_activation(self.act_func, self.ops_list[0] != 'act') + # dropout + if self.dropout_rate > 0: + modules['dropout'] = nn.Dropout(self.dropout_rate, inplace=True) + else: + modules['dropout'] = None + # linear + modules['weight'] = {'linear': nn.Linear(self.in_features, self.out_features, self.bias)} + + # add modules + for op in self.ops_list: + if modules[op] is None: + continue + elif op == 'weight': + if modules['dropout'] is not None: + self.add_module('dropout', modules['dropout']) + for key in modules['weight']: + self.add_module(key, modules['weight'][key]) + else: + self.add_module(op, modules[op]) + + @property + def ops_list(self): + return self.ops_order.split('_') + + @property + def bn_before_weight(self): + for op in self.ops_list: + if op == 'bn': + return True + elif op == 
'weight': + return False + raise ValueError('Invalid ops_order: %s' % self.ops_order) + + def forward(self, x): + for module in self._modules.values(): + x = module(x) + return x + + @staticmethod + def is_zero_layer(): + return False + + +class MBInvertedConvLayer(nn.Module): + """ + This layer is introduced in section 4.2 in the paper https://arxiv.org/pdf/1812.00332.pdf + """ + def __init__(self, in_channels, out_channels, + kernel_size=3, stride=1, expand_ratio=6, mid_channels=None): + super(MBInvertedConvLayer, self).__init__() + + self.in_channels = in_channels + self.out_channels = out_channels + + self.kernel_size = kernel_size + self.stride = stride + self.expand_ratio = expand_ratio + self.mid_channels = mid_channels + + if self.mid_channels is None: + feature_dim = round(self.in_channels * self.expand_ratio) + else: + feature_dim = self.mid_channels + + if self.expand_ratio == 1: + self.inverted_bottleneck = None + else: + self.inverted_bottleneck = nn.Sequential(OrderedDict([ + ('conv', nn.Conv2d(self.in_channels, feature_dim, 1, 1, 0, bias=False)), + ('bn', nn.BatchNorm2d(feature_dim)), + ('act', nn.ReLU6(inplace=True)), + ])) + + pad = get_same_padding(self.kernel_size) + self.depth_conv = nn.Sequential(OrderedDict([ + ('conv', nn.Conv2d(feature_dim, feature_dim, kernel_size, stride, pad, groups=feature_dim, bias=False)), + ('bn', nn.BatchNorm2d(feature_dim)), + ('act', nn.ReLU6(inplace=True)), + ])) + + self.point_linear = nn.Sequential(OrderedDict([ + ('conv', nn.Conv2d(feature_dim, out_channels, 1, 1, 0, bias=False)), + ('bn', nn.BatchNorm2d(out_channels)), + ])) + + def forward(self, x): + if self.inverted_bottleneck: + x = self.inverted_bottleneck(x) + x = self.depth_conv(x) + x = self.point_linear(x) + return x + + @staticmethod + def is_zero_layer(): + return False + + +class ZeroLayer(nn.Module): + + def __init__(self, stride): + super(ZeroLayer, self).__init__() + self.stride = stride + + def forward(self, x): + '''n, c, h, w = x.size() + h 
//= self.stride + w //= self.stride + device = x.get_device() if x.is_cuda else torch.device('cpu') + # noinspection PyUnresolvedReferences + padding = torch.zeros(n, c, h, w, device=device, requires_grad=False) + return padding''' + return x * 0 + + @staticmethod + def is_zero_layer(): + return True diff --git a/examples/nas/proxylessnas/putils.py b/examples/nas/proxylessnas/putils.py new file mode 100644 index 0000000000..c4900067a5 --- /dev/null +++ b/examples/nas/proxylessnas/putils.py @@ -0,0 +1,67 @@ +import torch.nn as nn + +def get_parameters(model, keys=None, mode='include'): + if keys is None: + for name, param in model.named_parameters(): + yield param + elif mode == 'include': + for name, param in model.named_parameters(): + flag = False + for key in keys: + if key in name: + flag = True + break + if flag: + yield param + elif mode == 'exclude': + for name, param in model.named_parameters(): + flag = True + for key in keys: + if key in name: + flag = False + break + if flag: + yield param + else: + raise ValueError('do not support: %s' % mode) + + +def get_same_padding(kernel_size): + if isinstance(kernel_size, tuple): + assert len(kernel_size) == 2, 'invalid kernel size: %s' % kernel_size + p1 = get_same_padding(kernel_size[0]) + p2 = get_same_padding(kernel_size[1]) + return p1, p2 + assert isinstance(kernel_size, int), 'kernel size should be either `int` or `tuple`' + assert kernel_size % 2 > 0, 'kernel size should be odd number' + return kernel_size // 2 + +def build_activation(act_func, inplace=True): + if act_func == 'relu': + return nn.ReLU(inplace=inplace) + elif act_func == 'relu6': + return nn.ReLU6(inplace=inplace) + elif act_func == 'tanh': + return nn.Tanh() + elif act_func == 'sigmoid': + return nn.Sigmoid() + elif act_func is None: + return None + else: + raise ValueError('do not support: %s' % act_func) + + +def make_divisible(v, divisor, min_val=None): + """ + This function is taken from the original tf repo. 
+ It ensures that all layers have a channel number that is divisible by 8 + It can be seen here: + https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py + """ + if min_val is None: + min_val = divisor + new_v = max(min_val, int(v + divisor / 2) // divisor * divisor) + # Make sure that round down does not go down by more than 10%. + if new_v < 0.9 * v: + new_v += divisor + return new_v diff --git a/examples/nas/proxylessnas/retrain.py b/examples/nas/proxylessnas/retrain.py new file mode 100644 index 0000000000..a7afb62927 --- /dev/null +++ b/examples/nas/proxylessnas/retrain.py @@ -0,0 +1,183 @@ +import time +import math +from datetime import timedelta +import torch +from torch import nn as nn +from nni.nas.pytorch.utils import AverageMeter + +def cross_entropy_with_label_smoothing(pred, target, label_smoothing=0.1): + logsoftmax = nn.LogSoftmax() + n_classes = pred.size(1) + # convert to one-hot + target = torch.unsqueeze(target, 1) + soft_target = torch.zeros_like(pred) + soft_target.scatter_(1, target, 1) + # label smoothing + soft_target = soft_target * (1 - label_smoothing) + label_smoothing / n_classes + return torch.mean(torch.sum(- soft_target * logsoftmax(pred), 1)) + +def accuracy(output, target, topk=(1,)): + maxk = max(topk) + batch_size = target.size(0) + + _, pred = output.topk(maxk, 1, True, True) + pred = pred.t() + correct = pred.eq(target.view(1, -1).expand_as(pred)) + + res = [] + for k in topk: + correct_k = correct[:k].view(-1).float().sum(0, keepdim=True) + res.append(correct_k.mul_(100.0 / batch_size)) + return res + + +class Retrain: + def __init__(self, model, optimizer, device, data_provider, n_epochs): + self.model = model + self.optimizer = optimizer + self.device = device + self.train_loader = data_provider.train + self.valid_loader = data_provider.valid + self.test_loader = data_provider.test + self.n_epochs = n_epochs + self.criterion = nn.CrossEntropyLoss() + + def run(self): + self.model = 
torch.nn.DataParallel(self.model) + self.model.to(self.device) + # train + self.train() + # validate + self.validate(is_test=False) + # test + self.validate(is_test=True) + + def train_one_epoch(self, adjust_lr_func, train_log_func, label_smoothing=0.1): + batch_time = AverageMeter('batch_time') + data_time = AverageMeter('data_time') + losses = AverageMeter('losses') + top1 = AverageMeter('top1') + top5 = AverageMeter('top5') + self.model.train() + end = time.time() + for i, (images, labels) in enumerate(self.train_loader): + data_time.update(time.time() - end) + new_lr = adjust_lr_func(i) + images, labels = images.to(self.device), labels.to(self.device) + output = self.model(images) + if label_smoothing > 0: + loss = cross_entropy_with_label_smoothing(output, labels, label_smoothing) + else: + loss = self.criterion(output, labels) + acc1, acc5 = accuracy(output, labels, topk=(1, 5)) + losses.update(loss, images.size(0)) + top1.update(acc1[0], images.size(0)) + top5.update(acc5[0], images.size(0)) + + # compute gradient and do SGD step + self.model.zero_grad() # or self.optimizer.zero_grad() + loss.backward() + self.optimizer.step() + + # measure elapsed time + batch_time.update(time.time() - end) + end = time.time() + + if i % 10 == 0 or i + 1 == len(self.train_loader): + batch_log = train_log_func(i, batch_time, data_time, losses, top1, top5, new_lr) + print(batch_log) + return top1, top5 + + def train(self, validation_frequency=1): + best_acc = 0 + nBatch = len(self.train_loader) + + def train_log_func(epoch_, i, batch_time, data_time, losses, top1, top5, lr): + batch_log = 'Train [{0}][{1}/{2}]\t' \ + 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' \ + 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' \ + 'Loss {losses.val:.4f} ({losses.avg:.4f})\t' \ + 'Top-1 acc {top1.val:.3f} ({top1.avg:.3f})'. 
\ + format(epoch_ + 1, i, nBatch - 1, + batch_time=batch_time, data_time=data_time, losses=losses, top1=top1) + batch_log += '\tTop-5 acc {top5.val:.3f} ({top5.avg:.3f})'.format(top5=top5) + batch_log += '\tlr {lr:.5f}'.format(lr=lr) + return batch_log + + def adjust_learning_rate(n_epochs, optimizer, epoch, batch=0, nBatch=None): + """ adjust learning of a given optimizer and return the new learning rate """ + # cosine + T_total = n_epochs * nBatch + T_cur = epoch * nBatch + batch + # init_lr = 0.05 + new_lr = 0.5 * 0.05 * (1 + math.cos(math.pi * T_cur / T_total)) + for param_group in optimizer.param_groups: + param_group['lr'] = new_lr + return new_lr + + for epoch in range(self.n_epochs): + print('\n', '-' * 30, 'Train epoch: %d' % (epoch + 1), '-' * 30, '\n') + end = time.time() + train_top1, train_top5 = self.train_one_epoch( + lambda i: adjust_learning_rate(self.n_epochs, self.optimizer, epoch, i, nBatch), + lambda i, batch_time, data_time, losses, top1, top5, new_lr: + train_log_func(epoch, i, batch_time, data_time, losses, top1, top5, new_lr), + ) + time_per_epoch = time.time() - end + seconds_left = int((self.n_epochs - epoch - 1) * time_per_epoch) + print('Time per epoch: %s, Est. 
complete in: %s' % ( + str(timedelta(seconds=time_per_epoch)), + str(timedelta(seconds=seconds_left)))) + + if (epoch + 1) % validation_frequency == 0: + val_loss, val_acc, val_acc5 = self.validate(is_test=False) + is_best = val_acc > best_acc + best_acc = max(best_acc, val_acc) + val_log = 'Valid [{0}/{1}]\tloss {2:.3f}\ttop-1 acc {3:.3f} ({4:.3f})'.\ + format(epoch + 1, self.n_epochs, val_loss, val_acc, best_acc) + val_log += '\ttop-5 acc {0:.3f}\tTrain top-1 {top1.avg:.3f}\ttop-5 {top5.avg:.3f}'.\ + format(val_acc5, top1=train_top1, top5=train_top5) + print(val_log) + else: + is_best = False + + def validate(self, is_test=True): + if is_test: + data_loader = self.test_loader + else: + data_loader = self.valid_loader + self.model.eval() + batch_time = AverageMeter('batch_time') + losses = AverageMeter('losses') + top1 = AverageMeter('top1') + top5 = AverageMeter('top5') + + end = time.time() + with torch.no_grad(): + for i, (images, labels) in enumerate(data_loader): + images, labels = images.to(self.device), labels.to(self.device) + # compute output + output = self.model(images) + loss = self.criterion(output, labels) + # measure accuracy and record loss + acc1, acc5 = accuracy(output, labels, topk=(1, 5)) + losses.update(loss, images.size(0)) + top1.update(acc1[0], images.size(0)) + top5.update(acc5[0], images.size(0)) + # measure elapsed time + batch_time.update(time.time() - end) + end = time.time() + + if i % 10 == 0 or i + 1 == len(data_loader): + if is_test: + prefix = 'Test' + else: + prefix = 'Valid' + test_log = prefix + ': [{0}/{1}]\t'\ + 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'\ + 'Loss {loss.val:.4f} ({loss.avg:.4f})\t'\ + 'Top-1 acc {top1.val:.3f} ({top1.avg:.3f})'.\ + format(i, len(data_loader) - 1, batch_time=batch_time, loss=losses, top1=top1) + test_log += '\tTop-5 acc {top5.val:.3f} ({top5.avg:.3f})'.format(top5=top5) + print(test_log) + return losses.avg, top1.avg, top5.avg \ No newline at end of file diff --git 
a/src/sdk/pynni/nni/nas/pytorch/base_mutator.py b/src/sdk/pynni/nni/nas/pytorch/base_mutator.py index 5c3502a51c..93a35c74ff 100644 --- a/src/sdk/pynni/nni/nas/pytorch/base_mutator.py +++ b/src/sdk/pynni/nni/nas/pytorch/base_mutator.py @@ -64,6 +64,10 @@ def mutables(self): """ return self._structured_mutables + @property + def undedup_mutables(self): + return self._structured_mutables.traverse(deduplicate=False) + def forward(self, *inputs): """ Warnings diff --git a/src/sdk/pynni/nni/nas/pytorch/proxylessnas/__init__.py b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/__init__.py new file mode 100644 index 0000000000..26feedba7d --- /dev/null +++ b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/__init__.py @@ -0,0 +1,2 @@ +from .mutator import ProxylessNasMutator +from .trainer import ProxylessNasTrainer diff --git a/src/sdk/pynni/nni/nas/pytorch/proxylessnas/mutator.py b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/mutator.py new file mode 100644 index 0000000000..6e3c7a5b60 --- /dev/null +++ b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/mutator.py @@ -0,0 +1,476 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. + +import math +import torch +from torch import nn as nn +from torch.nn import functional as F +import numpy as np + +from nni.nas.pytorch.base_mutator import BaseMutator +from nni.nas.pytorch.mutables import LayerChoice +from .utils import detach_variable + +class ArchGradientFunction(torch.autograd.Function): + @staticmethod + def forward(ctx, x, binary_gates, run_func, backward_func): + ctx.run_func = run_func + ctx.backward_func = backward_func + + detached_x = detach_variable(x) + with torch.enable_grad(): + output = run_func(detached_x) + ctx.save_for_backward(detached_x, output) + return output.data + + @staticmethod + def backward(ctx, grad_output): + detached_x, output = ctx.saved_tensors + + grad_x = torch.autograd.grad(output, detached_x, grad_output, only_inputs=True) + # compute gradients w.r.t. 
binary_gates + binary_grads = ctx.backward_func(detached_x.data, output.data, grad_output.data) + + return grad_x[0], binary_grads, None, None + +class MixedOp(nn.Module): + """ + This class is to instantiate and manage info of one LayerChoice. + It includes architecture weights, binary weights, and member functions + operating the weights. + + forward_mode: + forward/backward mode for LayerChoice: None, two, full, and full_v2. + For training architecture weights, we use full_v2 by default, and for training + model weights, we use None. + """ + forward_mode = None + def __init__(self, mutable): + """ + Parameters + ---------- + mutable : LayerChoice + A LayerChoice in user model + """ + super(MixedOp, self).__init__() + self.ap_path_alpha = nn.Parameter(torch.Tensor(mutable.length)) + self.ap_path_wb = nn.Parameter(torch.Tensor(mutable.length)) + self.ap_path_alpha.requires_grad = False + self.ap_path_wb.requires_grad = False + self.active_index = [0] + self.inactive_index = None + self.log_prob = None + self.current_prob_over_ops = None + self.n_choices = mutable.length + + def get_ap_path_alpha(self): + return self.ap_path_alpha + + def to_requires_grad(self): + self.ap_path_alpha.requires_grad = True + self.ap_path_wb.requires_grad = True + + def to_disable_grad(self): + self.ap_path_alpha.requires_grad = False + self.ap_path_wb.requires_grad = False + + def forward(self, mutable, x): + """ + Define forward of LayerChoice. For 'full_v2', backward is also defined. + The 'two' mode is explained in section 3.2.1 in the paper. + The 'full_v2' mode is explained in Appendix D in the paper. 
+ + Parameters + ---------- + mutable : LayerChoice + this layer's mutable + x : tensor + inputs of this layer, only support one input + + Returns + ------- + output: tensor + output of this layer + """ + if MixedOp.forward_mode == 'full' or MixedOp.forward_mode == 'two': + output = 0 + for _i in self.active_index: + oi = self.candidate_ops[_i](x) + output = output + self.ap_path_wb[_i] * oi + for _i in self.inactive_index: + oi = self.candidate_ops[_i](x) + output = output + self.ap_path_wb[_i] * oi.detach() + elif MixedOp.forward_mode == 'full_v2': + def run_function(key, candidate_ops, active_id): + def forward(_x): + return candidate_ops[active_id](_x) + return forward + + def backward_function(key, candidate_ops, active_id, binary_gates): + def backward(_x, _output, grad_output): + binary_grads = torch.zeros_like(binary_gates.data) + with torch.no_grad(): + for k in range(len(candidate_ops)): + if k != active_id: + out_k = candidate_ops[k](_x.data) + else: + out_k = _output.data + grad_k = torch.sum(out_k * grad_output) + binary_grads[k] = grad_k + return binary_grads + return backward + output = ArchGradientFunction.apply( + x, self.ap_path_wb, run_function(mutable.key, mutable.choices, self.active_index[0]), + backward_function(mutable.key, mutable.choices, self.active_index[0], self.ap_path_wb)) + else: + output = self.active_op(mutable)(x) + return output + + @property + def probs_over_ops(self): + """ + Apply softmax on alpha to generate probability distribution + + Returns + ------- + pytorch tensor + probability distribution + """ + probs = F.softmax(self.ap_path_alpha, dim=0) # softmax to probability + return probs + + @property + def chosen_index(self): + """ + choose the op with max prob + + Returns + ------- + int + index of the chosen one + numpy.float32 + prob of the chosen one + """ + probs = self.probs_over_ops.data.cpu().numpy() + index = int(np.argmax(probs)) + return index, probs[index] + + def active_op(self, mutable): + """ + assume only 
one path is active + + Returns + ------- + PyTorch module + the chosen operation + """ + return mutable.choices[self.active_index[0]] + + @property + def active_op_index(self): + """ + return active op's index, the active op is sampled + + Returns + ------- + int + index of the active op + """ + return self.active_index[0] + + def set_chosen_op_active(self): + """ + set chosen index, active and inactive indexes + """ + chosen_idx, _ = self.chosen_index + self.active_index = [chosen_idx] + self.inactive_index = [_i for _i in range(0, chosen_idx)] + \ + [_i for _i in range(chosen_idx + 1, self.n_choices)] + + def binarize(self, mutable): + """ + Sample based on alpha, and set binary weights accordingly. + ap_path_wb is set in this function, which is called binarize. + + Parameters + ---------- + mutable : LayerChoice + this layer's mutable + """ + self.log_prob = None + # reset binary gates + self.ap_path_wb.data.zero_() + probs = self.probs_over_ops + if MixedOp.forward_mode == 'two': + # sample two ops according to probs + sample_op = torch.multinomial(probs.data, 2, replacement=False) + probs_slice = F.softmax(torch.stack([ + self.ap_path_alpha[idx] for idx in sample_op + ]), dim=0) + self.current_prob_over_ops = torch.zeros_like(probs) + for i, idx in enumerate(sample_op): + self.current_prob_over_ops[idx] = probs_slice[i] + # choose one to be active and the other to be inactive according to probs_slice + c = torch.multinomial(probs_slice.data, 1)[0] # 0 or 1 + active_op = sample_op[c].item() + inactive_op = sample_op[1-c].item() + self.active_index = [active_op] + self.inactive_index = [inactive_op] + # set binary gate + self.ap_path_wb.data[active_op] = 1.0 + else: + sample = torch.multinomial(probs, 1)[0].item() + self.active_index = [sample] + self.inactive_index = [_i for _i in range(0, sample)] + \ + [_i for _i in range(sample + 1, len(mutable.choices))] + self.log_prob = torch.log(probs[sample]) + self.current_prob_over_ops = probs + 
self.ap_path_wb.data[sample] = 1.0 + # avoid over-regularization + for choice in mutable.choices: + for _, param in choice.named_parameters(): + param.grad = None + + @staticmethod + def delta_ij(i, j): + if i == j: + return 1 + else: + return 0 + + def set_arch_param_grad(self, mutable): + """ + Calculate alpha gradient for this LayerChoice. + It is calculated using gradient of binary gate, probs of ops. + """ + binary_grads = self.ap_path_wb.grad.data + if self.active_op(mutable).is_zero_layer(): + self.ap_path_alpha.grad = None + return + if self.ap_path_alpha.grad is None: + self.ap_path_alpha.grad = torch.zeros_like(self.ap_path_alpha.data) + if MixedOp.forward_mode == 'two': + involved_idx = self.active_index + self.inactive_index + probs_slice = F.softmax(torch.stack([ + self.ap_path_alpha[idx] for idx in involved_idx + ]), dim=0).data + for i in range(2): + for j in range(2): + origin_i = involved_idx[i] + origin_j = involved_idx[j] + self.ap_path_alpha.grad.data[origin_i] += \ + binary_grads[origin_j] * probs_slice[j] * (MixedOp.delta_ij(i, j) - probs_slice[i]) + for _i, idx in enumerate(self.active_index): + self.active_index[_i] = (idx, self.ap_path_alpha.data[idx].item()) + for _i, idx in enumerate(self.inactive_index): + self.inactive_index[_i] = (idx, self.ap_path_alpha.data[idx].item()) + else: + probs = self.probs_over_ops.data + for i in range(self.n_choices): + for j in range(self.n_choices): + self.ap_path_alpha.grad.data[i] += binary_grads[j] * probs[j] * (MixedOp.delta_ij(i, j) - probs[i]) + return + + def rescale_updated_arch_param(self): + """ + rescale architecture weights for the 'two' mode. 
+ """ + if not isinstance(self.active_index[0], tuple): + assert self.active_op.is_zero_layer() + return + involved_idx = [idx for idx, _ in (self.active_index + self.inactive_index)] + old_alphas = [alpha for _, alpha in (self.active_index + self.inactive_index)] + new_alphas = [self.ap_path_alpha.data[idx] for idx in involved_idx] + + offset = math.log( + sum([math.exp(alpha) for alpha in new_alphas]) / sum([math.exp(alpha) for alpha in old_alphas]) + ) + + for idx in involved_idx: + self.ap_path_alpha.data[idx] -= offset + + +class ProxylessNasMutator(BaseMutator): + """ + This mutator initializes and operates all the LayerChoices of the input model. + It is for the corresponding trainer to control the training process of LayerChoices, + coordinating with whole training process. + """ + def __init__(self, model): + """ + Init a MixedOp instance for each mutable i.e., LayerChoice. + And register the instantiated MixedOp in corresponding LayerChoice. + If does not register it in LayerChoice, DataParallel does not work then, + because architecture weights are not included in the DataParallel model. + When MixedOPs are registered, we use ```requires_grad``` to control + whether calculate gradients of architecture weights. + + Parameters + ---------- + model : pytorch model + The model that users want to tune, it includes search space defined with nni nas apis + """ + super(ProxylessNasMutator, self).__init__(model) + self._unused_modules = None + self.mutable_list = [] + for mutable in self.undedup_mutables: + self.mutable_list.append(mutable) + mutable.registered_module = MixedOp(mutable) + + def on_forward_layer_choice(self, mutable, *inputs): + """ + Callback of layer choice forward. This function defines the forward + logic of the input mutable. So mutable is only interface, its real + implementation is defined in mutator. 
+ + Parameters + ---------- + mutable: LayerChoice + forward logic of this input mutable + inputs: list of torch.Tensor + inputs of this mutable + + Returns + ------- + torch.Tensor + output of this mutable, i.e., LayerChoice + int + index of the chosen op + """ + # FIXME: return mask, to be consistent with other algorithms + idx = mutable.registered_module.active_op_index + return mutable.registered_module(mutable, *inputs), idx + + def reset_binary_gates(self): + """ + For each LayerChoice, binarize binary weights + based on alpha to only activate one op. + It traverses all the mutables in the model to do this. + """ + for mutable in self.undedup_mutables: + mutable.registered_module.binarize(mutable) + + def set_chosen_op_active(self): + """ + For each LayerChoice, set the op with highest alpha as the chosen op. + Usually used for validation. + """ + for mutable in self.undedup_mutables: + mutable.registered_module.set_chosen_op_active() + + def num_arch_params(self): + """ + The number of mutables, i.e., LayerChoice + + Returns + ------- + int + the number of LayerChoice in user model + """ + return len(self.mutable_list) + + def set_arch_param_grad(self): + """ + For each LayerChoice, calculate gradients for architecture weights, i.e., alpha + """ + for mutable in self.undedup_mutables: + mutable.registered_module.set_arch_param_grad(mutable) + + def get_architecture_parameters(self): + """ + Get all the architecture parameters. + + yield + ----- + PyTorch Parameter + Return ap_path_alpha of the traversed mutable + """ + for mutable in self.undedup_mutables: + yield mutable.registered_module.get_ap_path_alpha() + + def change_forward_mode(self, mode): + """ + Update forward mode of MixedOps, as training architecture weights and + model weights use different forward modes. 
+ """ + MixedOp.forward_mode = mode + + def get_forward_mode(self): + """ + Get forward mode of MixedOp + + Returns + ------- + string + the current forward mode of MixedOp + """ + return MixedOp.forward_mode + + def rescale_updated_arch_param(self): + """ + Rescale architecture weights in 'two' mode. + """ + for mutable in self.undedup_mutables: + mutable.registered_module.rescale_updated_arch_param() + + def unused_modules_off(self): + """ + Remove unused modules for each mutables. + The removed modules are kept in ```self._unused_modules``` for resume later. + """ + self._unused_modules = [] + for mutable in self.undedup_mutables: + mixed_op = mutable.registered_module + unused = {} + if self.get_forward_mode() in ['full', 'two', 'full_v2']: + involved_index = mixed_op.active_index + mixed_op.inactive_index + else: + involved_index = mixed_op.active_index + for i in range(mixed_op.n_choices): + if i not in involved_index: + unused[i] = mutable.choices[i] + mutable.choices[i] = None + self._unused_modules.append(unused) + + def unused_modules_back(self): + """ + Resume the removed modules back. + """ + if self._unused_modules is None: + return + for m, unused in zip(self.mutable_list, self._unused_modules): + for i in unused: + m.choices[i] = unused[i] + self._unused_modules = None + + def arch_requires_grad(self): + """ + Make architecture weights require gradient + """ + for mutable in self.undedup_mutables: + mutable.registered_module.to_requires_grad() + + def arch_disable_grad(self): + """ + Disable gradient of architecture weights, i.e., does not + calcuate gradient for them. + """ + for mutable in self.undedup_mutables: + mutable.registered_module.to_disable_grad() + + def sample_final(self): + """ + Generate the final chosen architecture. 
+ + Returns + ------- + dict + the choice of each mutable, i.e., LayerChoice + """ + result = dict() + for mutable in self.undedup_mutables: + assert isinstance(mutable, LayerChoice) + index, _ = mutable.registered_module.chosen_index + # pylint: disable=not-callable + result[mutable.key] = F.one_hot(torch.tensor(index), num_classes=mutable.length).view(-1).bool() + return result diff --git a/src/sdk/pynni/nni/nas/pytorch/proxylessnas/trainer.py b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/trainer.py new file mode 100644 index 0000000000..d9c86a6a9f --- /dev/null +++ b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/trainer.py @@ -0,0 +1,500 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. + +import math +import time +import json +import logging + +import torch +from torch import nn as nn + +from nni.nas.pytorch.base_trainer import BaseTrainer +from nni.nas.pytorch.trainer import TorchTensorEncoder +from nni.nas.pytorch.utils import AverageMeter +from .mutator import ProxylessNasMutator +from .utils import cross_entropy_with_label_smoothing, accuracy + +logger = logging.getLogger(__name__) + +class ProxylessNasTrainer(BaseTrainer): + def __init__(self, model, model_optim, device, + train_loader, valid_loader, label_smoothing=0.1, + n_epochs=120, init_lr=0.025, binary_mode='full_v2', + arch_init_type='normal', arch_init_ratio=1e-3, + arch_optim_lr=1e-3, arch_weight_decay=0, + grad_update_arch_param_every=5, grad_update_steps=1, + warmup=True, warmup_epochs=25, + arch_valid_frequency=1, + load_ckpt=False, ckpt_path=None, arch_path=None): + """ + Parameters + ---------- + model : pytorch model + the user model, which has mutables + model_optim : pytorch optimizer + the user defined optimizer + device : pytorch device + the devices to train/search the model + train_loader : pytorch data loader + data loader for the training set + valid_loader : pytorch data loader + data loader for the validation set + label_smoothing : float + for label 
smoothing + n_epochs : int + number of epochs to train/search + init_lr : float + init learning rate for training the model + binary_mode : str + the forward/backward mode for the binary weights in mutator + arch_init_type : str + the way to init architecture parameters + arch_init_ratio : float + the ratio to init architecture parameters + arch_optim_lr : float + learning rate of the architecture parameters optimizer + arch_weight_decay : float + weight decay of the architecture parameters optimizer + grad_update_arch_param_every : int + update architecture weights every this number of minibatches + grad_update_steps : int + during each update of architecture weights, the number of steps to train + warmup : bool + whether to do warmup + warmup_epochs : int + the number of epochs to do during warmup + arch_valid_frequency : int + frequency of printing validation result + load_ckpt : bool + whether load checkpoint + ckpt_path : str + checkpoint path, if load_ckpt is True, ckpt_path cannot be None + arch_path : str + the path to store chosen architecture + """ + self.model = model + self.model_optim = model_optim + self.train_loader = train_loader + self.valid_loader = valid_loader + self.device = device + self.n_epochs = n_epochs + self.init_lr = init_lr + self.warmup = warmup + self.warmup_epochs = warmup_epochs + self.arch_valid_frequency = arch_valid_frequency + self.label_smoothing = label_smoothing + + self.train_batch_size = train_loader.batch_sampler.batch_size + self.valid_batch_size = valid_loader.batch_sampler.batch_size + # update architecture parameters every this number of minibatches + self.grad_update_arch_param_every = grad_update_arch_param_every + # the number of steps per architecture parameter update + self.grad_update_steps = grad_update_steps + self.binary_mode = binary_mode + + self.load_ckpt = load_ckpt + self.ckpt_path = ckpt_path + self.arch_path = arch_path + + # init mutator + self.mutator = ProxylessNasMutator(model) + + # DataParallel 
should be put behind the init of mutator + self.model = torch.nn.DataParallel(self.model) + self.model.to(self.device) + + # iter of valid dataset for training architecture weights + self._valid_iter = None + # init architecture weights + self._init_arch_params(arch_init_type, arch_init_ratio) + # build architecture optimizer + self.arch_optimizer = torch.optim.Adam(self.mutator.get_architecture_parameters(), + arch_optim_lr, + weight_decay=arch_weight_decay, + betas=(0, 0.999), + eps=1e-8) + + self.criterion = nn.CrossEntropyLoss() + self.warmup_curr_epoch = 0 + self.train_curr_epoch = 0 + + def _init_arch_params(self, init_type='normal', init_ratio=1e-3): + """ + Initialize architecture weights + """ + for param in self.mutator.get_architecture_parameters(): + if init_type == 'normal': + param.data.normal_(0, init_ratio) + elif init_type == 'uniform': + param.data.uniform_(-init_ratio, init_ratio) + else: + raise NotImplementedError + + def _validate(self): + """ + Do validation. During validation, LayerChoices use the chosen active op. 
+ + Returns + ------- + float, float, float + average loss, average top1 accuracy, average top5 accuracy + """ + self.valid_loader.batch_sampler.batch_size = self.valid_batch_size + self.valid_loader.batch_sampler.drop_last = False + + self.mutator.set_chosen_op_active() + # remove unused modules to save memory + self.mutator.unused_modules_off() + # test on validation set under train mode + self.model.train() + batch_time = AverageMeter('batch_time') + losses = AverageMeter('losses') + top1 = AverageMeter('top1') + top5 = AverageMeter('top5') + end = time.time() + with torch.no_grad(): + for i, (images, labels) in enumerate(self.valid_loader): + images, labels = images.to(self.device), labels.to(self.device) + output = self.model(images) + loss = self.criterion(output, labels) + acc1, acc5 = accuracy(output, labels, topk=(1, 5)) + losses.update(loss, images.size(0)) + top1.update(acc1[0], images.size(0)) + top5.update(acc5[0], images.size(0)) + # measure elapsed time + batch_time.update(time.time() - end) + end = time.time() + + if i % 10 == 0 or i + 1 == len(self.valid_loader): + test_log = 'Valid' + ': [{0}/{1}]\t'\ + 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'\ + 'Loss {loss.val:.4f} ({loss.avg:.4f})\t'\ + 'Top-1 acc {top1.val:.3f} ({top1.avg:.3f})'.\ + format(i, len(self.valid_loader) - 1, batch_time=batch_time, loss=losses, top1=top1) + # return top5: + test_log += '\tTop-5 acc {top5.val:.3f} ({top5.avg:.3f})'.format(top5=top5) + logger.info(test_log) + self.mutator.unused_modules_back() + return losses.avg, top1.avg, top5.avg + + def _warm_up(self): + """ + Warm up the model, during warm up, architecture weights are not trained. 
+ """ + lr_max = 0.05 + data_loader = self.train_loader + nBatch = len(data_loader) + T_total = self.warmup_epochs * nBatch # total num of batches + + for epoch in range(self.warmup_curr_epoch, self.warmup_epochs): + logger.info('\n--------Warmup epoch: %d--------\n', epoch + 1) + batch_time = AverageMeter('batch_time') + data_time = AverageMeter('data_time') + losses = AverageMeter('losses') + top1 = AverageMeter('top1') + top5 = AverageMeter('top5') + # switch to train mode + self.model.train() + + end = time.time() + logger.info('warm_up epoch: %d', epoch) + for i, (images, labels) in enumerate(data_loader): + data_time.update(time.time() - end) + # lr + T_cur = epoch * nBatch + i + warmup_lr = 0.5 * lr_max * (1 + math.cos(math.pi * T_cur / T_total)) + for param_group in self.model_optim.param_groups: + param_group['lr'] = warmup_lr + images, labels = images.to(self.device), labels.to(self.device) + # compute output + self.mutator.reset_binary_gates() # random sample binary gates + self.mutator.unused_modules_off() # remove unused module for speedup + output = self.model(images) + if self.label_smoothing > 0: + loss = cross_entropy_with_label_smoothing(output, labels, self.label_smoothing) + else: + loss = self.criterion(output, labels) + # measure accuracy and record loss + acc1, acc5 = accuracy(output, labels, topk=(1, 5)) + losses.update(loss, images.size(0)) + top1.update(acc1[0], images.size(0)) + top5.update(acc5[0], images.size(0)) + # compute gradient and do SGD step + self.model.zero_grad() + loss.backward() + self.model_optim.step() + # unused modules back + self.mutator.unused_modules_back() + # measure elapsed time + batch_time.update(time.time() - end) + end = time.time() + + if i % 10 == 0 or i + 1 == nBatch: + batch_log = 'Warmup Train [{0}][{1}/{2}]\t' \ + 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' \ + 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' \ + 'Loss {losses.val:.4f} ({losses.avg:.4f})\t' \ + 'Top-1 acc {top1.val:.3f} 
({top1.avg:.3f})\t' \
+                                'Top-5 acc {top5.val:.3f} ({top5.avg:.3f})\tlr {lr:.5f}'. \
+                        format(epoch + 1, i, nBatch - 1, batch_time=batch_time, data_time=data_time,
+                               losses=losses, top1=top1, top5=top5, lr=warmup_lr)
+                    logger.info(batch_log)
+            val_loss, val_top1, val_top5 = self._validate()
+            val_log = 'Warmup Valid [{0}/{1}]\tloss {2:.3f}\ttop-1 acc {3:.3f}\ttop-5 acc {4:.3f}\t' \
+                      'Train top-1 {top1.avg:.3f}\ttop-5 {top5.avg:.3f}'. \
+                format(epoch + 1, self.warmup_epochs, val_loss, val_top1, val_top5, top1=top1, top5=top5)
+            logger.info(val_log)
+            self.save_checkpoint()
+            self.warmup_curr_epoch += 1
+
+    def _get_update_schedule(self, nBatch):
+        """
+        Generate the schedule for training architecture weights. A key is the index of
+        the minibatch after which architecture weights are updated; the value is the
+        number of update steps to perform at that point.
+
+        Parameters
+        ----------
+        nBatch : int
+            the total number of minibatches in one epoch
+
+        Returns
+        -------
+        dict
+            the schedule for updating architecture weights
+        """
+        schedule = {}
+        for i in range(nBatch):
+            if (i + 1) % self.grad_update_arch_param_every == 0:
+                schedule[i] = self.grad_update_steps
+        return schedule
+
+    def _calc_learning_rate(self, epoch, batch=0, nBatch=None):
+        """
+        Calculate the current learning rate on the cosine-annealing schedule.
+ """ + T_total = self.n_epochs * nBatch + T_cur = epoch * nBatch + batch + lr = 0.5 * self.init_lr * (1 + math.cos(math.pi * T_cur / T_total)) + return lr + + def _adjust_learning_rate(self, optimizer, epoch, batch=0, nBatch=None): + """ + Adjust learning of a given optimizer and return the new learning rate + + Parameters + ---------- + optimizer : pytorch optimizer + the used optimizer + epoch : int + the current epoch number + batch : int + the current minibatch + nBatch : int + the total number of minibatches in one epoch + + Returns + ------- + float + the adjusted learning rate + """ + new_lr = self._calc_learning_rate(epoch, batch, nBatch) + for param_group in optimizer.param_groups: + param_group['lr'] = new_lr + return new_lr + + def _train(self): + """ + Train the model, it trains model weights and architecute weights. + Architecture weights are trained according to the schedule. + Before updating architecture weights, ```requires_grad``` is enabled. + Then, it is disabled after the updating, in order not to update + architecture weights when training model weights. 
+ """ + nBatch = len(self.train_loader) + arch_param_num = self.mutator.num_arch_params() + binary_gates_num = self.mutator.num_arch_params() + logger.info('#arch_params: %d\t#binary_gates: %d', arch_param_num, binary_gates_num) + + update_schedule = self._get_update_schedule(nBatch) + + for epoch in range(self.train_curr_epoch, self.n_epochs): + logger.info('\n--------Train epoch: %d--------\n', epoch + 1) + batch_time = AverageMeter('batch_time') + data_time = AverageMeter('data_time') + losses = AverageMeter('losses') + top1 = AverageMeter('top1') + top5 = AverageMeter('top5') + # switch to train mode + self.model.train() + + end = time.time() + for i, (images, labels) in enumerate(self.train_loader): + data_time.update(time.time() - end) + lr = self._adjust_learning_rate(self.model_optim, epoch, batch=i, nBatch=nBatch) + # train weight parameters + images, labels = images.to(self.device), labels.to(self.device) + self.mutator.reset_binary_gates() + self.mutator.unused_modules_off() + output = self.model(images) + if self.label_smoothing > 0: + loss = cross_entropy_with_label_smoothing(output, labels, self.label_smoothing) + else: + loss = self.criterion(output, labels) + acc1, acc5 = accuracy(output, labels, topk=(1, 5)) + losses.update(loss, images.size(0)) + top1.update(acc1[0], images.size(0)) + top5.update(acc5[0], images.size(0)) + self.model.zero_grad() + loss.backward() + self.model_optim.step() + self.mutator.unused_modules_back() + if epoch > 0: + for _ in range(update_schedule.get(i, 0)): + start_time = time.time() + # GradientArchSearchConfig + self.mutator.arch_requires_grad() + arch_loss, exp_value = self._gradient_step() + self.mutator.arch_disable_grad() + used_time = time.time() - start_time + log_str = 'Architecture [%d-%d]\t Time %.4f\t Loss %.4f\t null %s' % \ + (epoch + 1, i, used_time, arch_loss, exp_value) + logger.info(log_str) + batch_time.update(time.time() - end) + end = time.time() + # training log + if i % 10 == 0 or i + 1 == nBatch: 
+ batch_log = 'Train [{0}][{1}/{2}]\t' \ + 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' \ + 'Data Time {data_time.val:.3f} ({data_time.avg:.3f})\t' \ + 'Loss {losses.val:.4f} ({losses.avg:.4f})\t' \ + 'Top-1 acc {top1.val:.3f} ({top1.avg:.3f})\t' \ + 'Top-5 acc {top5.val:.3f} ({top5.avg:.3f})\tlr {lr:.5f}'. \ + format(epoch + 1, i, nBatch - 1, batch_time=batch_time, data_time=data_time, + losses=losses, top1=top1, top5=top5, lr=lr) + logger.info(batch_log) + # validate + if (epoch + 1) % self.arch_valid_frequency == 0: + val_loss, val_top1, val_top5 = self._validate() + val_log = 'Valid [{0}]\tloss {1:.3f}\ttop-1 acc {2:.3f} \ttop-5 acc {3:.3f}\t' \ + 'Train top-1 {top1.avg:.3f}\ttop-5 {top5.avg:.3f}'. \ + format(epoch + 1, val_loss, val_top1, val_top5, top1=top1, top5=top5) + logger.info(val_log) + self.save_checkpoint() + self.train_curr_epoch += 1 + + def _valid_next_batch(self): + """ + Get next one minibatch from validation set + + Returns + ------- + (tensor, tensor) + the tuple of images and labels + """ + if self._valid_iter is None: + self._valid_iter = iter(self.valid_loader) + try: + data = next(self._valid_iter) + except StopIteration: + self._valid_iter = iter(self.valid_loader) + data = next(self._valid_iter) + return data + + def _gradient_step(self): + """ + This gradient step is for updating architecture weights. + Mutator is intensively used in this function to operate on + architecture weights. 
+ + Returns + ------- + float, None + loss of the model, None + """ + # use the same batch size as train batch size for architecture weights + self.valid_loader.batch_sampler.batch_size = self.train_batch_size + self.valid_loader.batch_sampler.drop_last = True + self.model.train() + self.mutator.change_forward_mode(self.binary_mode) + time1 = time.time() # time + # sample a batch of data from validation set + images, labels = self._valid_next_batch() + images, labels = images.to(self.device), labels.to(self.device) + time2 = time.time() # time + self.mutator.reset_binary_gates() + self.mutator.unused_modules_off() + output = self.model(images) + time3 = time.time() + ce_loss = self.criterion(output, labels) + expected_value = None + loss = ce_loss + self.model.zero_grad() + loss.backward() + self.mutator.set_arch_param_grad() + self.arch_optimizer.step() + if self.mutator.get_forward_mode() == 'two': + self.mutator.rescale_updated_arch_param() + self.mutator.unused_modules_back() + self.mutator.change_forward_mode(None) + time4 = time.time() + logger.info('(%.4f, %.4f, %.4f)', time2 - time1, time3 - time2, time4 - time3) + return loss.data.item(), expected_value.item() if expected_value is not None else None + + def save_checkpoint(self): + """ + Save checkpoint of the whole model. Saving model weights and architecture weights in + ```ckpt_path```, and saving currently chosen architecture in ```arch_path```. + """ + if self.ckpt_path: + state = { + 'warmup_curr_epoch': self.warmup_curr_epoch, + 'train_curr_epoch': self.train_curr_epoch, + 'model': self.model.state_dict(), + 'optim': self.model_optim.state_dict(), + 'arch_optim': self.arch_optimizer.state_dict() + } + torch.save(state, self.ckpt_path) + if self.arch_path: + self.export(self.arch_path) + + def load_checkpoint(self): + """ + Load the checkpoint from ```ckpt_path```. 
+ """ + assert self.ckpt_path is not None, "If load_ckpt is not None, ckpt_path should not be None" + ckpt = torch.load(self.ckpt_path) + self.warmup_curr_epoch = ckpt['warmup_curr_epoch'] + self.train_curr_epoch = ckpt['train_curr_epoch'] + self.model.load_state_dict(ckpt['model']) + self.model_optim.load_state_dict(ckpt['optim']) + self.arch_optimizer.load_state_dict(ckpt['arch_optim']) + + def train(self): + """ + Train the whole model. + """ + if self.load_ckpt: + self.load_checkpoint() + if self.warmup: + self._warm_up() + self._train() + + def export(self, file_name): + """ + Export the chosen architecture into a file + + Parameters + ---------- + file_name : str + the file that stores exported chosen architecture + """ + exported_arch = self.mutator.sample_final() + with open(file_name, 'w') as f: + json.dump(exported_arch, f, indent=2, sort_keys=True, cls=TorchTensorEncoder) + + def validate(self): + raise NotImplementedError + + def checkpoint(self): + raise NotImplementedError diff --git a/src/sdk/pynni/nni/nas/pytorch/proxylessnas/utils.py b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/utils.py new file mode 100644 index 0000000000..b703810d3b --- /dev/null +++ b/src/sdk/pynni/nni/nas/pytorch/proxylessnas/utils.py @@ -0,0 +1,78 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+
+import torch
+import torch.nn as nn
+
+def detach_variable(inputs):
+    """
+    Detach variables while preserving their requires_grad flag
+
+    Parameters
+    ----------
+    inputs : pytorch tensors
+        pytorch tensors
+    """
+    if isinstance(inputs, tuple):
+        return tuple([detach_variable(x) for x in inputs])
+    else:
+        x = inputs.detach()
+        x.requires_grad = inputs.requires_grad
+        return x
+
+def cross_entropy_with_label_smoothing(pred, target, label_smoothing=0.1):
+    """
+    Parameters
+    ----------
+    pred : pytorch tensor
+        predicted value
+    target : pytorch tensor
+        label
+    label_smoothing : float
+        the degree of label smoothing
+
+    Returns
+    -------
+    pytorch tensor
+        cross entropy
+    """
+    logsoftmax = nn.LogSoftmax(dim=1)
+    n_classes = pred.size(1)
+    # convert to one-hot
+    target = torch.unsqueeze(target, 1)
+    soft_target = torch.zeros_like(pred)
+    soft_target.scatter_(1, target, 1)
+    # label smoothing
+    soft_target = soft_target * (1 - label_smoothing) + label_smoothing / n_classes
+    return torch.mean(torch.sum(- soft_target * logsoftmax(pred), 1))
+
+def accuracy(output, target, topk=(1,)):
+    """
+    Computes the accuracy@k for the specified values of k
+
+    Parameters
+    ----------
+    output : pytorch tensor
+        output, e.g., predicted value
+    target : pytorch tensor
+        label
+    topk : tuple
+        specify top1 and top5
+
+    Returns
+    -------
+    list
+        accuracy of top1 and top5
+    """
+    maxk = max(topk)
+    batch_size = target.size(0)
+
+    _, pred = output.topk(maxk, 1, True, True)
+    pred = pred.t()
+    correct = pred.eq(target.view(1, -1).expand_as(pred))
+
+    res = []
+    for k in topk:
+        correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
+        res.append(correct_k.mul_(100.0 / batch_size))
+    return res
From 2e84b445125aa2365eb5e79c94287d869db3366d Mon Sep 17 00:00:00 2001
From: Chi Song <27178119+squirrelsc@users.noreply.github.com>
Date: Tue, 11 Feb 2020 13:34:04 +0800
Subject: [PATCH 9/9] Chinese translation (#2015)

---
 docs/zh_CN/Assessor/BuiltinAssessor.md | 2 +-
 docs/zh_CN/Assessor/MedianstopAssessor.md | 2 +-
docs/zh_CN/FeatureEngineering/Overview.md | 8 +- docs/zh_CN/NAS/Overview.md | 43 +++---- docs/zh_CN/NAS/Proxylessnas.md | 63 ++++++++++ docs/zh_CN/Release.md | 2 +- docs/zh_CN/TrainingService/PaiMode.md | 39 +++++- docs/zh_CN/TrainingService/PaiYarnMode.md | 10 +- docs/zh_CN/Tutorial/ExperimentConfig.md | 4 +- docs/zh_CN/Tutorial/FAQ.md | 2 +- docs/zh_CN/Tutorial/HowToUseDocker.md | 2 +- docs/zh_CN/Tutorial/InstallationLinux.md | 119 ++++++++++++++++++ docs/zh_CN/Tutorial/InstallationWin.md | 139 ++++++++++++++++++++++ docs/zh_CN/Tutorial/Nnictl.md | 24 ++-- docs/zh_CN/Tutorial/QuickStart.md | 2 +- docs/zh_CN/builtin_assessor.rst | 10 ++ docs/zh_CN/builtin_tuner.rst | 7 +- docs/zh_CN/feature_engineering.rst | 5 +- docs/zh_CN/hpo_advanced.rst | 9 ++ docs/zh_CN/hyperparameter_tune.rst | 27 +++++ docs/zh_CN/index.rst | 18 +-- docs/zh_CN/installation.rst | 12 ++ docs/zh_CN/model_compression.rst | 11 +- docs/zh_CN/nas.rst | 19 +-- docs/zh_CN/reference.rst | 13 +- examples/model_compress/speedup_zh_CN.md | 96 +++++++++++++++ 26 files changed, 595 insertions(+), 93 deletions(-) create mode 100644 docs/zh_CN/NAS/Proxylessnas.md create mode 100644 docs/zh_CN/Tutorial/InstallationLinux.md create mode 100644 docs/zh_CN/Tutorial/InstallationWin.md create mode 100644 docs/zh_CN/hpo_advanced.rst create mode 100644 docs/zh_CN/hyperparameter_tune.rst create mode 100644 docs/zh_CN/installation.rst create mode 100644 examples/model_compress/speedup_zh_CN.md diff --git a/docs/zh_CN/Assessor/BuiltinAssessor.md b/docs/zh_CN/Assessor/BuiltinAssessor.md index 93f72d251e..69b0a0aa58 100644 --- a/docs/zh_CN/Assessor/BuiltinAssessor.md +++ b/docs/zh_CN/Assessor/BuiltinAssessor.md @@ -8,7 +8,7 @@ NNI 提供了先进的调优算法,使用上也很简单。 下面是内置 As | Assessor | 算法简介 | | --------------------------------- | 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| [**Medianstop**](#MedianStop) | Medianstop 是一个简单的提前终止算法。 如果 Trial X 的在步骤 S 的最好目标值比所有已完成 Trial 的步骤 S 的中位数值明显要低,这个 Trial 就会被提前停止。 [参考论文](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf) | +| [**Medianstop**](#MedianStop) | Medianstop 是一个简单的提前终止算法。 如果 Trial X 在步骤 S 的最好目标值低于所有已完成 Trial 前 S 个步骤目标平均值的中位数,这个 Trial 就会被提前停止。 [参考论文](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf) | | [**Curvefitting**](#Curvefitting) | Curve Fitting Assessor 是一个 LPA (learning, predicting, assessing,即学习、预测、评估) 的算法。 如果预测的 Trial X 在 step S 比性能最好的 Trial 要差,就会提前终止它。 此算法中采用了 12 种曲线来拟合精度曲线。 [参考论文](http://aad.informatik.uni-freiburg.de/papers/15-IJCAI-Extrapolation_of_Learning_Curves.pdf) | ## 用法 diff --git a/docs/zh_CN/Assessor/MedianstopAssessor.md b/docs/zh_CN/Assessor/MedianstopAssessor.md index 86f6f3b48b..805a08dc96 100644 --- a/docs/zh_CN/Assessor/MedianstopAssessor.md +++ b/docs/zh_CN/Assessor/MedianstopAssessor.md @@ -2,4 +2,4 @@ ## Median Stop -Medianstop 是一种简单的提前终止 Trial 的策略,可参考[论文](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf)。 如果 Trial X 的在步骤 S 的最好目标值比所有已完成 Trial 的步骤 S 的中位数值明显要低,这个 Trial 就会被提前停止。 \ No newline at end of file +Medianstop 是一种简单的提前终止 Trial 的策略,可参考[论文](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46180.pdf)。 如果 Trial X 在步骤 S 的最好目标值低于所有已完成 Trial 前 S 个步骤目标平均值的中位数,这个 Trial 就会被提前停止。 \ No newline at end of file diff --git a/docs/zh_CN/FeatureEngineering/Overview.md b/docs/zh_CN/FeatureEngineering/Overview.md index 0ac9e0d399..5d2e841dba 100644 --- a/docs/zh_CN/FeatureEngineering/Overview.md +++ b/docs/zh_CN/FeatureEngineering/Overview.md @@ -7,7 +7,7 @@ - 
[GBDTSelector](./GBDTSelector.md) -# 如何使用 +## 如何使用 ```python from nni.feature_engineering.gradient_selector import GradientFeatureSelector @@ -30,7 +30,7 @@ print(fgs.get_selected_features(...)) 使用内置 Selector 时,需要 `import` 对应的特征选择器,并 `initialize`。 可在 Selector 中调用 `fit` 函数来传入数据。 之后,可通过 `get_seleteced_features` 来获得重要的特征。 不同 Selector 的函数参数可能不同,在使用前需要先检查文档。 -# 如何定制 +## 如何定制? NNI 内置了_最先进的_特征工程算法的 Selector。 NNI 也支持定制自己的特征 Selector。 @@ -238,7 +238,7 @@ print("Pipeline Score: ", pipeline.score(X_train, y_train)) ``` -# 基准测试 +## 基准测试 `Baseline` 表示没有进行特征选择,直接将数据传入 LogisticRegression。 此基准测试中,仅用了 10% 的训练数据作为测试数据。 对于 GradientFeatureSelector,仅使用了前 20 个特征。 下列指标是在给定测试数据和标签上的平均精度。 @@ -255,7 +255,7 @@ print("Pipeline Score: ", pipeline.score(X_train, y_train)) 代码参考 `/examples/feature_engineering/gradient_feature_selector/benchmark_test.py`。 -## **参考和反馈** +## 参考和反馈 * 在 GitHub 中[提交此功能的 Bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md); * 在 GitHub 中[提交新功能或改进请求](https://github.com/microsoft/nni/issues/new?template=enhancement.md); * 了解 NNI 中[神经网络结构搜索的更多信息](https://github.com/microsoft/nni/blob/master/docs/zh_CN/NAS/Overview.md); diff --git a/docs/zh_CN/NAS/Overview.md b/docs/zh_CN/NAS/Overview.md index fc6c734c81..c96c1ea6da 100644 --- a/docs/zh_CN/NAS/Overview.md +++ b/docs/zh_CN/NAS/Overview.md @@ -6,23 +6,20 @@ 以此为动力,NNI 的目标是提供统一的体系结构,以加速NAS上的创新,并将最新的算法更快地应用于现实世界中的问题上。 -通过[统一的接口](./NasInterface.md),有两种方式进行架构搜索。 [一种](#supported-one-shot-nas-algorithms)称为 one-shot NAS,基于搜索空间构建了一个超级网络,并使用 one-shot 训练来生成性能良好的子模型。 [第二种](./NasInterface.md#经典分布式搜索)是传统的搜索方法,搜索空间中每个子模型作为独立的 Trial 运行,将性能结果发给 Tuner,由 Tuner 来生成新的子模型。 - -* [支持的 One-shot NAS 算法](#supported-one-shot-nas-algorithms) -* [使用 NNI Experiment 的经典分布式 NAS](./NasInterface.md#经典分布式搜索) -* [NNI NAS 编程接口](./NasInterface.md) +通过统一的接口,有两种方法来使用神经网络架构搜索。 [一种](#supported-one-shot-nas-algorithms)称为 one-shot NAS,基于搜索空间构建了一个超级网络,并使用 one-shot 训练来生成性能良好的子模型。 [第二种](#支持的分布式-nas-算法)是传统的搜索方法,搜索空间中每个子模型作为独立的 Trial 运行,将性能结果发给 Tuner,由 
Tuner 来生成新的子模型。 ## 支持的 One-shot NAS 算法 NNI 现在支持以下 NAS 算法,并且正在添加更多算法。 用户可以重现算法或在自己的数据集上使用它。 鼓励用户使用 [NNI API](#use-nni-api) 实现其它算法,以使更多人受益。 -| 名称 | 算法简介 | -| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| [ENAS](ENAS.md) | [Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/abs/1802.03268). 在 ENAS 中,Contoller 学习在大的计算图中搜索最有子图的方式来发现神经网络。 它通过在子模型间共享参数来实现加速和出色的性能指标。 | -| [DARTS](DARTS.md) | [DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) 引入了一种在两级网络优化中使用的可微分算法。 | -| [P-DARTS](PDARTS.md) | [Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) 基于DARTS。 它引入了一种有效的算法,可在搜索过程中逐渐增加搜索的深度。 | -| [SPOS](SPOS.md) | 论文 [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) 构造了一个采用统一的路径采样方法来训练简化的超网络,并使用进化算法来提高搜索神经网络结构的效率。 | -| [CDARTS](CDARTS.md) | [Cyclic Differentiable Architecture Search](https://arxiv.org/abs/****) 在搜索和评估的网络见构建了循环反馈的机制。 通过引入的循环的可微分架构搜索框架将两个网络集成为一个架构。 | +| 名称 | 算法简介 | +| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| [ENAS](ENAS.md) | [Efficient Neural Architecture Search via Parameter Sharing](https://arxiv.org/abs/1802.03268). 
在 ENAS 中,Controller 学习在大的计算图中搜索最优子图的方式来发现神经网络。 它通过在子模型间共享参数来实现加速和出色的性能指标。 |
+| [DARTS](DARTS.md)               | [DARTS: Differentiable Architecture Search](https://arxiv.org/abs/1806.09055) 引入了一种在两级网络优化中使用的可微分算法。                                                                          |
+| [P-DARTS](PDARTS.md)            | [Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation](https://arxiv.org/abs/1904.12760) 基于 DARTS。 它引入了一种有效的算法,可在搜索过程中逐渐增加搜索的深度。 |
+| [SPOS](SPOS.md)                 | 论文 [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) 构造了一个采用统一的路径采样方法来训练简化的超网络,并使用进化算法来提高搜索神经网络结构的效率。              |
+| [CDARTS](CDARTS.md)             | [Cyclic Differentiable Architecture Search](https://arxiv.org/abs/****) 在搜索和评估的网络间构建了循环反馈的机制。 通过引入循环的可微分架构搜索框架,将两个网络集成为一个架构。                                             |
+| [ProxylessNAS](Proxylessnas.md) | [ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware](https://arxiv.org/abs/1812.00332). |
 
 One-shot 算法**不需要 nnictl,可单独运行**。 只实现了 PyTorch 版本。 将来的版本会支持 Tensorflow 2.x。
 
@@ -33,22 +30,26 @@ One-shot 算法**不需要 nnictl,可单独运行**。 只实现了 PyTorch
 * PyTorch 1.2+
 * git
 
-## 使用 NNI API
+## 支持的分布式 NAS 算法
 
-注意,我们正在尝试通过统一的编程接口来支持各种 NAS 算法,当前处于试验阶段。 这意味着当前编程接口将来会有变化。
+| 名称 | 算法简介 |
+| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [SPOS](SPOS.md) | 论文 [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https://arxiv.org/abs/1904.00420) 构造了一个采用统一的路径采样方法来训练简化的超网络,并使用进化算法来提高搜索神经网络结构的效率。 |
 
-### 编程接口
+```eval_rst
+.. Note:: SPOS 是一种两阶段算法,第一阶段是 one-shot,第二阶段是分布式的,利用第一阶段的结果作为检查点。
+```
+
+## 使用 NNI API
 
 在两种场景下需要用于设计和搜索模型的编程接口。
 
 1. 在设计神经网络时,可能在层、子模型或连接上有多种选择,并且无法确定是其中一种或某些的组合的结果最好。 因此,需要简单的方法来表达候选的层或子模型。
 2. 
在神经网络上应用 NAS 时,需要统一的方式来表达架构的搜索空间,这样不必为不同的搜索算法来更改代码。
 
-NNI 提出的 API 在[这里](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch)。 [这里](https://github.com/microsoft/nni/tree/master/examples/nas/naive)包含了基于此 API 的 NAS 实现示例。
+[这里](./NasGuide.md)是在 NNI 上开始使用 NAS 的用户指南。
+
+## 参考和反馈
 
-## **参考和反馈**
 * 在 GitHub 中[提交此功能的 Bug](https://github.com/microsoft/nni/issues/new?template=bug-report.md);
-* 在 GitHub 中[提交新功能或改进请求](https://github.com/microsoft/nni/issues/new?template=enhancement.md);
-* 了解 NNI 中[特征工程的更多信息](https://github.com/microsoft/nni/blob/master/docs/zh_CN/FeatureEngineering/Overview.md);
-* 了解 NNI 中[模型自动压缩的更多信息](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Compressor/Overview.md);
-* 了解如何[使用 NNI 进行超参数调优](https://github.com/microsoft/nni/blob/master/docs/zh_CN/Tuner/BuiltinTuner.md);
+* 在 GitHub 中[提交新功能或改进请求](https://github.com/microsoft/nni/issues/new?template=enhancement.md)。
\ No newline at end of file
diff --git a/docs/zh_CN/NAS/Proxylessnas.md b/docs/zh_CN/NAS/Proxylessnas.md
new file mode 100644
index 0000000000..c5ca64b05d
--- /dev/null
+++ b/docs/zh_CN/NAS/Proxylessnas.md
@@ -0,0 +1,63 @@
+# NNI 上的 ProxylessNAS
+
+## 介绍
+
+论文 [ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware](https://arxiv.org/pdf/1812.00332.pdf) 去掉了代理,直接从大规模目标任务和目标硬件平台上学习架构。 它解决了可微分 NAS 大量内存消耗的问题,从而将计算成本降低到普通训练的水平,同时仍然能使用大规模的候选集。 参考论文了解详情。
+
+## 用法
+
+要使用 ProxylessNAS 训练、搜索方法,用户要在模型中使用 [NNI NAS 接口](NasGuide.md)来指定搜索空间,例如,`LayerChoice`,`InputChoice`。 定义并实例化模型,然后实例化 ProxylessNasTrainer,并将模型传入,剩下的工作由 Trainer 来完成。
+```python
+trainer = ProxylessNasTrainer(model,
+                              model_optim=optimizer,
+                              train_loader=data_provider.train,
+                              valid_loader=data_provider.valid,
+                              device=device,
+                              warmup=True,
+                              ckpt_path=args.checkpoint_path,
+                              arch_path=args.arch_path)
+trainer.train()
+trainer.export(args.arch_path)
+```
+[此处](https://github.com/microsoft/nni/tree/master/examples/nas/proxylessnas)是完整示例。
+
+**ProxylessNasTrainer 的输入参数**
+
+* **model** 
(*PyTorch 模型, 必需*) - 需要调优、搜索的模型。 它具有可变项以指定搜索空间。
+* **model_optim** (*PyTorch 优化器, 必需*) - 训练模型所需要的优化器。
+* **device** (*device, 必需*) - 用于训练、搜索的 device。 Trainer 会使用数据并行化。
+* **train_loader** (*PyTorch DataLoader, 必需*) - 训练数据集的 DataLoader。
+* **valid_loader** (*PyTorch DataLoader, 必需*) - 验证数据集的 DataLoader。
+* **label_smoothing** (*float, 可选, 默认为 0.1*) - 标签平滑度。
+* **n_epochs** (*int, 可选, 默认为 120*) - 训练、搜索的 Epoch 数量。
+* **init_lr** (*float, 可选, 默认为 0.025*) - 训练的初始学习率。
+* **binary_mode** (*'two', 'full', 或 'full_v2', 可选, 默认为 'full_v2'*) - Mutator 中二进制权重的 forward, backward 模式。 'full' 表示前向传播所有候选操作,'two' 表示仅前向传播两个采样操作,'full_v2' 表示在反向传播时重新计算非激活的操作。
+* **arch_init_type** (*'normal' 或 'uniform', 可选, 默认为 'normal'*) - 初始化架构参数的方法。
+* **arch_init_ratio** (*float, 可选, 默认为 1e-3*) - 初始化架构参数的比例。
+* **arch_optim_lr** (*float, 可选, 默认为 1e-3*) - 架构参数优化器的学习率。
+* **arch_weight_decay** (*float, 可选, 默认为 0*) - 架构参数优化器的权重衰减。
+* **grad_update_arch_param_every** (*int, 可选, 默认为 5*) - 多少个迷你批处理后更新一次架构权重。
+* **grad_update_steps** (*int, 可选, 默认为 1*) - 在每次权重更新时,训练架构权重的次数。
+* **warmup** (*bool, 可选, 默认为 True*) - 是否需要热身。
+* **warmup_epochs** (*int, 可选, 默认为 25*) - 热身的 Epoch 数量。
+* **arch_valid_frequency** (*int, 可选, 默认为 1*) - 输出验证集结果的频率。
+* **load_ckpt** (*bool, 可选, 默认为 False*) - 是否加载检查点。
+* **ckpt_path** (*str, 可选, 默认为 None*) - 检查点路径。如果 load_ckpt 为 True,ckpt_path 不能为 None。
+* **arch_path** (*str, 可选, 默认为 None*) - 选择架构的路径。
+
+
+## 实现
+
+NNI 上的实现基于[官方实现](https://github.com/mit-han-lab/ProxylessNAS)。 官方实现支持两种搜索方法:梯度下降和强化学习,还支持不同的硬件,包括 'mobile', 'cpu', 'gpu8', 'flops'。 在当前的 NNI 实现中,支持梯度下降训练方法,不支持不同的硬件。 完整支持正在进行中。
+
+下面将介绍实现的细节。 像 NNI 上其它 one-shot NAS 算法一样,ProxylessNAS 由两部分组成:*搜索空间* 和 *训练方法*。 为了用户能灵活的定义自己的搜索空间,并使用内置的 ProxylessNAS 训练方法,将使用 [NNI NAS 接口](NasGuide.md)定制的搜索空间放在了[示例代码](https://github.com/microsoft/nni/tree/master/examples/nas/proxylessnas)中,并将搜索方法放在了 [SDK](https://github.com/microsoft/nni/tree/master/src/sdk/pynni/nni/nas/pytorch/proxylessnas) 中。
+
+![](../../img/proxylessnas.png)
+
+ProxylessNAS 搜索方法由 
ProxylessNasMutator 和 ProxylessNasTrainer 组成。 ProxylessNasMutator 为每个可变量初始化了 MixedOp (即, LayerChoice),并会在 MixedOp 中管理架构权重。 **对于数据并行化**,架构权重会在用户模型中。 具体地说,在 ProxylessNAS 实现中,为可变变量 (即, LayerChoice) 添加了 MixedOp 作为成员变量。 Mutator 也公开了两个成员函数:`arch_requires_grad` 和 `arch_disable_grad`,用于 Trainer 来控制架构权重的训练。
+
+ProxylessNasMutator 还实现了可变量的前向逻辑 (即, LayerChoice)。
+
+## 重现结果
+
+进行中...
diff --git a/docs/zh_CN/Release.md b/docs/zh_CN/Release.md
index eba498b2b6..1831171c82 100644
--- a/docs/zh_CN/Release.md
+++ b/docs/zh_CN/Release.md
@@ -229,7 +229,7 @@
 
 ### 主要功能
 
-* [支持在 Windows 上使用 NNI](Tutorial/NniOnWindows.md)
+* [支持在 Windows 上使用 NNI](Tutorial/InstallationWin.md)
   * NNI 可在 Windows 上使用本机模式
 * [支持新的 Advisor: BOHB](Tuner/BohbAdvisor.md)
   * 支持新的 BOHB Advisor,这是一个健壮而有效的超参调优算法,囊括了贝叶斯优化和 Hyperband 的优点
diff --git a/docs/zh_CN/TrainingService/PaiMode.md b/docs/zh_CN/TrainingService/PaiMode.md
index 019ba21737..6e3287accf 100644
--- a/docs/zh_CN/TrainingService/PaiMode.md
+++ b/docs/zh_CN/TrainingService/PaiMode.md
@@ -4,7 +4,31 @@ NNI 支持在 [OpenPAI](https://github.com/Microsoft/pai) (简称 pai)上运
 
 ## 设置环境
 
-参考[指南](../Tutorial/QuickStart.md)安装 NNI。
+步骤 1. 参考[指南](../Tutorial/QuickStart.md)安装 NNI。
+
+步骤 2. 获取 OpenPAI 的令牌。
+点击 OpenPAI 界面右上方的 `My profile` 按钮。 ![](../../img/pai_token_button.jpg) 找到 token management,复制当前账号的令牌。 ![](../../img/pai_token_profile.jpg)
+
+步骤 3. 将 NFS 存储挂载到本机。
+点击 OpenPAI 网站的 `Submit job` 按钮。 ![](../../img/pai_job_submission_page.jpg)
+在作业提交页面找到数据管理区域。 ![](../../img/pai_data_management_page.jpg)
+`DEFAULT_STORAGE` 字段是在作业运行起来后,OpenPAI 容器中挂载的路径。 `Preview container paths` 是 API 提供的 NFS 主机和路径,需要将对应的位置挂载到本机,然后 NNI 才能使用 NFS 存储。
+例如,使用下列命令:
+
+    sudo mount -t nfs gcr-openpai-infra02:/pai/data /local/mnt
+
+
+然后容器中的 `/data` 路径会被挂载到本机的 `/local/mnt` 文件夹。
+然后在 NNI 的配置文件中如下配置:
+
+    nniManagerNFSMountPath: /local/mnt
+    containerNFSMountPath: /data
+
+
+步骤 4. 
获取 OpenPAI 存储插件名称。 联系 OpenPAI 管理员,获得 NFS 存储插件的名称。 默认存储的名称是 `teamwise_storage`,NNI 配置文件中的配置如下: + + paiStoragePlugin: teamwise_storage + ## 运行 Experiment @@ -39,6 +63,7 @@ trial: virtualCluster: default nniManagerNFSMountPath: /home/user/mnt containerNFSMountPath: /mnt/data/user + paiStoragePlugin: team_wise # 配置要访问的 OpenPAI 集群 paiConfig: userName: your_pai_nni_user @@ -51,12 +76,12 @@ paiConfig: 与[本机模式](LocalMode.md),以及[远程计算机模式](RemoteMachineMode.md)相比,pai 模式的 Trial 需要额外的配置: * cpuNum - * 必填。 Trial 程序的 CPU 需求,必须为正数。 + * 可选。 Trial 程序的 CPU 需求,必须为正数。 如果没在 Trial 配置中设置,则需要在 `paiConfigPath` 指定的配置文件中设置。 * memoryMB - * 必填。 Trial 程序的内存需求,必须为正数。 + * 可选。 Trial 程序的内存需求,必须为正数。 如果没在 Trial 配置中设置,则需要在 `paiConfigPath` 指定的配置文件中设置。 * image - * 必填。 在 pai 模式中,Trial 程序由 OpenPAI 在 [Docker 容器](https://www.docker.com/)中安排运行。 此键用来指定 Trial 程序的容器使用的 Docker 映像。 - * [Docker Hub](https://hub.docker.com/) 上有预制的 NNI Docker 映像 [nnimsra/nni](https://hub.docker.com/r/msranni/nni/)。 它包含了用来启动 NNI Experiment 所依赖的所有 Python 包,Node 模块和 JavaScript。 生成此 Docker 映像的文件在[这里](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile)。 可以直接使用此映像,或参考它来生成自己的映像。 + * 可选。 在 pai 模式中,Trial 程序由 OpenPAI 在 [Docker 容器](https://www.docker.com/)中安排运行。 此键用来指定 Trial 程序的容器使用的 Docker 映像。 + * [Docker Hub](https://hub.docker.com/) 上有预制的 NNI Docker 映像 [nnimsra/nni](https://hub.docker.com/r/msranni/nni/)。 它包含了用来启动 NNI Experiment 所依赖的所有 Python 包,Node 模块和 JavaScript。 生成此 Docker 映像的文件在[这里](https://github.com/Microsoft/nni/tree/master/deployment/docker/Dockerfile)。 可以直接使用此映像,或参考它来生成自己的映像。 如果没在 Trial 配置中设置,则需要在 `paiConfigPath` 指定的配置文件中设置。 * virtualCluster * 可选。 设置 OpenPAI 的 virtualCluster,即虚拟集群。 如果未设置此参数,将使用默认(default)虚拟集群。 * nniManagerNFSMountPath @@ -64,7 +89,9 @@ paiConfig: * containerNFSMountPath * 必填。 在 OpenPAI 的容器中设置挂载路径。 * paiStoragePlugin - * 必填。 设置 PAI 中使用的存储插件的名称。 + * 可选。 设置 PAI 中使用的存储插件的名称。 如果没在 Trial 配置中设置,则需要在 `paiConfigPath` 指定的配置文件中设置。 +* paiConfigPath + * 可选。 设置 OpenPAI 作业配置文件路径,文件为 YAML 格式。 完成并保存 NNI 
Experiment 配置文件后(例如可保存为:exp_pai.yml),运行以下命令: diff --git a/docs/zh_CN/TrainingService/PaiYarnMode.md b/docs/zh_CN/TrainingService/PaiYarnMode.md index 0f930967a2..8287228097 100644 --- a/docs/zh_CN/TrainingService/PaiYarnMode.md +++ b/docs/zh_CN/TrainingService/PaiYarnMode.md @@ -6,7 +6,7 @@ 参考[指南](../Tutorial/QuickStart.md)安装 NNI。 ## 运行 Experiment -以 `examples/trials/mnist-annotation` 为例。 NNI 的 YAML 配置文件如下: +以 `examples/trials/mnist-tfv1` 为例。 NNI 的 YAML 配置文件如下: ```yaml authorName: your_name @@ -22,14 +22,14 @@ trainingServicePlatform: paiYarn # 搜索空间文件 searchSpacePath: search_space.json # 可选项: true, false -useAnnotation: true +useAnnotation: false tuner: builtinTunerName: TPE classArgs: optimize_mode: maximize trial: command: python3 mnist.py - codeDir: ~/nni/examples/trials/mnist-annotation + codeDir: ~/nni/examples/trials/mnist-tfv1 gpuNum: 0 cpuNum: 1 memoryMB: 8196 @@ -83,14 +83,14 @@ paiYarnConfig: portNumber: 1 ``` -NNI 支持 OpenPAIYarn 中的两种认证授权方法,即密码和 paiYarn Token,[参考](https://github.com/microsoft/paiYarn/blob/b6bd2ab1c8890f91b7ac5859743274d2aa923c22/docs/rest-server/API.md#2-authentication)。 授权配置在 `paiYarnConfig` 字段中。 密码认证的 `paiYarnConfig` 配置如下: +NNI 支持 OpenPAIYarn 中的两种认证授权方法,即密码和 paiYarn 令牌(token),[参考](https://github.com/microsoft/paiYarn/blob/b6bd2ab1c8890f91b7ac5859743274d2aa923c22/docs/rest-server/API.md#2-authentication)。 授权配置在 `paiYarnConfig` 字段中。 密码认证的 `paiYarnConfig` 配置如下: ``` paiYarnConfig: userName: your_paiYarn_nni_user passWord: your_paiYarn_password host: 10.1.1.1 ``` -Token 认证的 `paiYarnConfig` 配置如下: +令牌认证的 `paiYarnConfig` 配置如下: ``` paiYarnConfig: userName: your_paiYarn_nni_user diff --git a/docs/zh_CN/Tutorial/ExperimentConfig.md b/docs/zh_CN/Tutorial/ExperimentConfig.md index 590f966439..79c7362984 100644 --- a/docs/zh_CN/Tutorial/ExperimentConfig.md +++ b/docs/zh_CN/Tutorial/ExperimentConfig.md @@ -661,9 +661,9 @@ OpenPAI 帐户的密码。 #### token -如果使用 token 身份验证,则需要。 字符串。 +如果使用令牌(token)身份验证,则需要。 字符串。 -可以从 OpenPAI 门户检索的个人访问 token。 +可以从 OpenPAI 
门户检索的个人访问令牌。
 
 #### host
diff --git a/docs/zh_CN/Tutorial/FAQ.md b/docs/zh_CN/Tutorial/FAQ.md
index 7577248612..8cc286d4e7 100644
--- a/docs/zh_CN/Tutorial/FAQ.md
+++ b/docs/zh_CN/Tutorial/FAQ.md
@@ -54,7 +54,7 @@ nnictl 在执行时,使用 tmp 目录作为临时目录来复制 codeDir 下
 
 ### NNI 在 Windows 上的问题
 
-参考 [Windows 上使用 NNI](NniOnWindows.md)。
+参考 [Windows 上的 NNI](InstallationWin.md#FAQ)。
 
 ### 更多常见问题解答
diff --git a/docs/zh_CN/Tutorial/HowToUseDocker.md b/docs/zh_CN/Tutorial/HowToUseDocker.md
index 918cf0a25f..a584c42b14 100644
--- a/docs/zh_CN/Tutorial/HowToUseDocker.md
+++ b/docs/zh_CN/Tutorial/HowToUseDocker.md
@@ -38,7 +38,7 @@
 如果你直接使用NNI的官方镜像`msranni/nni`来启动实验,你可以直接使用`nnictl`命令。 NNI的官方镜像有最基础的python环境和深度学习框架。
 
-如果你使用你自己的docker镜像,你首先需要安装NNI环境。[参考](Installation.md)
+如果使用自己的 Docker 镜像,需要首先[安装 NNI](InstallationLinux.md)。
 
 如果你想要使用NNI的官方例子,你可以通过以下git命令来克隆NNI:
diff --git a/docs/zh_CN/Tutorial/InstallationLinux.md b/docs/zh_CN/Tutorial/InstallationLinux.md
new file mode 100644
index 0000000000..aa48498c4e
--- /dev/null
+++ b/docs/zh_CN/Tutorial/InstallationLinux.md
@@ -0,0 +1,119 @@
+# 在 Linux 和 Mac 下安装
+
+## 安装
+
+在 Linux 和 macOS 上安装,遵循以下相同的说明。
+
+### 通过 pip 命令安装 NNI
+
+先决条件:`python 64-bit >= 3.5`
+
+```bash
+python3 -m pip install --upgrade nni
+```
+### 通过源代码安装 NNI
+
+如果对某个或最新版本的代码感兴趣,可通过源代码安装 NNI。
+
+先决条件:`python 64-bit >=3.5`, `git`, `wget`
+
+```bash
+git clone -b v1.3 https://github.com/Microsoft/nni.git
+cd nni
+./install.sh
+```
+### 在 Docker 映像中使用 NNI
+
+也可将 NNI 安装到 Docker 映像中。 参考[这里](../deployment/docker/README.md)来生成 NNI 的 Docker 映像。 也可通过此命令从 Docker Hub 中直接拉取 NNI 的映像 `docker pull msranni/nni:latest`。
+
+## 验证安装
+
+以下示例基于 TensorFlow 1.x。确保运行环境中使用的是 **TensorFlow 1.x**。
+
+* 通过克隆源代码下载示例。
+
+  ```bash
+  git clone -b v1.3 https://github.com/Microsoft/nni.git
+  ```
+
+* 运行 MNIST 示例。
+
+  ```bash
+  nnictl create --config nni/examples/trials/mnist-tfv1/config.yml
+  ```
+
+* 在命令行中等待输出 `INFO: Successfully started experiment!`。 此消息表明 Experiment 已成功启动。 通过命令行输出的 `Web UI url` 来访问 Experiment 的界面。
+
+```text 
+INFO: Starting restful server... +INFO: Successfully started Restful server! +INFO: Setting local config... +INFO: Successfully set local config! +INFO: Starting experiment... +INFO: Successfully started experiment! +----------------------------------------------------------------------- +The experiment id is egchD4qy +The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080 +----------------------------------------------------------------------- + +You can use these commands to get more information about the experiment +----------------------------------------------------------------------- + commands description + +1. nnictl experiment show show the information of experiments +2. nnictl trial ls list all of trial jobs +3. nnictl top monitor the status of running experiments +4. nnictl log stderr show stderr log content +5. nnictl log stdout show stdout log content +6. nnictl stop stop an experiment +7. nnictl trial kill kill a trial job by id +8. nnictl --help get help information about nnictl +----------------------------------------------------------------------- +``` + +* 在浏览器中打开 `Web UI url`,可看到下图的 Experiment 详细信息,以及所有的 Trial 任务。 查看[这里](../Tutorial/WebUI.md)的更多页面。 + +![概述](../../img/webui_overview_page.png) + +![详细说明](../../img/webui_trialdetail_page.png) + +## 系统需求 + +由于程序变更,NNI 的最低配置会有所更改。 + +### Linux + +| | 推荐配置 | 最低配置 | +| -------- | ----------------------------------------- | ------------------------------------- | +| **操作系统** | Ubuntu 16.04 或以上版本 | | +| **CPU** | Intel® Core™ i5 或 AMD Phenom™ II X3 或更高配置 | Intel® Core™ i3 或 AMD Phenom™ X3 8650 | +| **GPU** | NVIDIA® GeForce® GTX 660 或更高配置 | NVIDIA® GeForce® GTX 460 | +| **内存** | 6 GB | 4 GB | +| **存储** | 30 GB 可用的磁盘空间 | | +| **网络** | 宽带连接 | | +| **分辨率** | 1024 x 768 以上 | | + +### macOS + +| | 推荐配置 | 最低配置 | +| -------- | ------------------------ | -------------------------------------------------- | +| **操作系统** | macOS 10.14.1 或更高版本 | | +| **CPU** | Intel® Core™ i7-4770 或更高 | Intel® Core™ 
i5-760 或更高 |
+| **GPU** | AMD Radeon™ R9 M395X 或更高 | NVIDIA® GeForce® GT 750M 或 AMD Radeon™ R9 M290 或更高 |
+| **内存** | 8 GB | 4 GB |
+| **存储** | 70GB 可用空间 SSD 硬盘 | 70GB 可用空间及 7200 RPM 硬盘 |
+| **网络** | 宽带连接 | |
+| **分辨率** | 1024 x 768 以上 | |
+
+## 更多
+
+* [概述](../Overview.md)
+* [使用命令行工具 nnictl](Nnictl.md)
+* [使用 NNIBoard](WebUI.md)
+* [定制搜索空间](SearchSpaceSpec.md)
+* [配置 Experiment](ExperimentConfig.md)
+* [如何在本机运行 Experiment (支持多 GPU 卡)?](../TrainingService/LocalMode.md)
+* [如何在多机上运行 Experiment?](../TrainingService/RemoteMachineMode.md)
+* [如何在 OpenPAI 上运行 Experiment?](../TrainingService/PaiMode.md)
+* [如何通过 Kubeflow 在 Kubernetes 上运行 Experiment?](../TrainingService/KubeflowMode.md)
+* [如何通过 FrameworkController 在 Kubernetes 上运行 Experiment?](../TrainingService/FrameworkControllerMode.md)
\ No newline at end of file
diff --git a/docs/zh_CN/Tutorial/InstallationWin.md b/docs/zh_CN/Tutorial/InstallationWin.md
new file mode 100644
index 0000000000..59320870ec
--- /dev/null
+++ b/docs/zh_CN/Tutorial/InstallationWin.md
@@ -0,0 +1,139 @@
+# 在 Windows 上安装
+
+## 安装
+
+强烈建议使用 Anaconda 或 Miniconda 来管理多个 Python 环境。
+
+### 通过 pip 命令安装 NNI
+
+  先决条件:`python 64-bit >= 3.5`
+
+  ```bash
+  python -m pip install --upgrade nni
+  ```
+
+### 通过源代码安装 NNI
+
+  如果对某个或最新版本的代码感兴趣,可通过源代码安装 NNI。
+
+  先决条件:`python 64-bit >=3.5`, `git`, `PowerShell`
+
+  ```bash
+  git clone -b v1.3 https://github.com/Microsoft/nni.git
+  cd nni
+  powershell -ExecutionPolicy Bypass -file install.ps1
+  ```
+
+## 验证安装
+
+以下示例基于 TensorFlow 1.x。确保运行环境中使用的是 **TensorFlow 1.x**。
+
+* 通过克隆源代码下载示例。
+
+  ```bash
+  git clone -b v1.3 https://github.com/Microsoft/nni.git
+  ```
+
+* 运行 MNIST 示例。
+
+  ```bash
+  nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml
+  ```
+
+  注意:在其它示例中,如果 Python3 是通过 `python` 命令启动,需要将每个示例 YAML 文件的 Trial 命令中的 `python3` 改为 `python`。
+
+* 在命令行中等待输出 `INFO: Successfully started experiment!`。 此消息表明 Experiment 已成功启动。 通过命令行输出的 `Web UI url` 来访问 Experiment 的界面。
+
+```text
+INFO: Starting restful 
server... +INFO: Successfully started Restful server! +INFO: Setting local config... +INFO: Successfully set local config! +INFO: Starting experiment... +INFO: Successfully started experiment! +----------------------------------------------------------------------- +The experiment id is egchD4qy +The Web UI urls are: http://223.255.255.1:8080 http://127.0.0.1:8080 +----------------------------------------------------------------------- + +You can use these commands to get more information about the experiment +----------------------------------------------------------------------- + commands description +1. nnictl experiment show show the information of experiments +2. nnictl trial ls list all of trial jobs +3. nnictl top monitor the status of running experiments +4. nnictl log stderr show stderr log content +5. nnictl log stdout show stdout log content +6. nnictl stop stop an experiment +7. nnictl trial kill kill a trial job by id +8. nnictl --help get help information about nnictl +----------------------------------------------------------------------- +``` + +* 在浏览器中打开 `Web UI url`,可看到下图的 Experiment 详细信息,以及所有的 Trial 任务。 查看[这里](../Tutorial/WebUI.md)的更多页面。 + +![概述](../../img/webui_overview_page.png) + +![详细说明](../../img/webui_trialdetail_page.png) + +## 系统需求 + +以下是 NNI 在 Windows 上的最低配置,推荐使用 Windows 10 1809 版。 由于程序变更,NNI 的最低配置会有所更改。 + +| | 推荐配置 | 最低配置 | +| -------- | ----------------------------------------- | ------------------------------------- | +| **操作系统** | Windows 10 1809 或更高版本 | | +| **CPU** | Intel® Core™ i5 或 AMD Phenom™ II X3 或更高配置 | Intel® Core™ i3 或 AMD Phenom™ X3 8650 | +| **GPU** | NVIDIA® GeForce® GTX 660 或更高配置 | NVIDIA® GeForce® GTX 460 | +| **内存** | 6 GB | 4 GB | +| **存储** | 30 GB 可用的磁盘空间 | | +| **网络** | 宽带连接 | | +| **分辨率** | 1024 x 768 以上 | | + +## 常见问答 + +### 安装 NNI 时出现 simplejson 错误 + +确保安装了 C++ 14.0 编译器。 +> building 'simplejson._speedups' extension error: [WinError 3] The system cannot find the path specified + +### 在命令行或 PowerShell 中,Trial 
因为缺少 DLL 而失败 + +此错误因为缺少 LIBIFCOREMD.DLL 和 LIBMMD.DLL 文件,且 SciPy 安装失败。 使用 Anaconda 或 Miniconda 和 Python(64位)可解决。 +> ImportError: DLL load failed + +### Web 界面上的 Trial 错误 + +检查 Trial 日志文件来了解详情。 + +如果存在 stderr 文件,也需要查看其内容。 可能的错误情况包括: + +* 忘记将 Experiment 配置的 Trial 命令中的 `python3` 改为 `python`。 +* 忘记安装 Experiment 的依赖,如 TensorFlow,Keras 等。 + +### 无法在 Windows 上使用 BOHB +确保安装了 C ++ 14.0 编译器然后尝试运行 `nnictl package install --name=BOHB` 来安装依赖项。 + +### Windows 上不支持的 Tuner +当前不支持 SMAC,原因可参考[此问题](https://github.com/automl/SMAC3/issues/483)。 + +### 将 Windows 服务器用作远程服务器 +目前不支持。 + +注意: + +* 如果遇到 `Segmentation fault` 这样的错误,参考[常见问答](FAQ.md)。 + + +## 更多 + +* [概述](../Overview.md) +* [使用命令行工具 nnictl](Nnictl.md) +* [使用 NNIBoard](WebUI.md) +* [定制搜索空间](SearchSpaceSpec.md) +* [配置 Experiment](ExperimentConfig.md) +* [如何在本机运行 Experiment (支持多 GPU 卡)?](../TrainingService/LocalMode.md) +* [如何在多机上运行 Experiment?](../TrainingService/RemoteMachineMode.md) +* [如何在 OpenPAI 上运行 Experiment?](../TrainingService/PaiMode.md) +* [如何通过 Kubeflow 在 Kubernetes 上运行 Experiment?](../TrainingService/KubeflowMode.md) +* [如何通过 FrameworkController 在 Kubernetes 上运行 Experiment?](../TrainingService/FrameworkControllerMode.md) \ No newline at end of file diff --git a/docs/zh_CN/Tutorial/Nnictl.md b/docs/zh_CN/Tutorial/Nnictl.md index 38b66d314b..ff6f775a5c 100644 --- a/docs/zh_CN/Tutorial/Nnictl.md +++ b/docs/zh_CN/Tutorial/Nnictl.md @@ -44,12 +44,12 @@ nnictl 支持的命令: * 选项 - | 参数及缩写 | 是否必需 | 默认值 | 说明 | - | ------------ | ----- | --- | ---------------------- | - | --config, -c | True | | Experiment 的 YAML 配置文件 | - | --port, -p | False | | RESTful 服务的端口 | - | --debug, -d | False | | 设置为调试模式 | - | --watch, -w | False | | 启动为监视模式 | + | 参数及缩写 | 是否必需 | 默认值 | 说明 | + | ---------------- | ----- | --- | ---------------------- | + | --config, -c | True | | Experiment 的 YAML 配置文件 | + | --port, -p | False | | RESTful 服务的端口 | + | --debug, -d | False | | 设置为调试模式 | + | --foreground, -f | False | | 设为前台运行模式,将日志输出到终端 | * 示例 @@ -93,12 +93,12 @@ 
Commands supported by nnictl:

* Options

-  | Name, shorthand | Required | Default | Description |
-  | ----------- | ----- | --- | -------------------------------- |
-  | id | True | | The id of the experiment you want to resume |
-  | --port, -p | False | | Rest port of the experiment you want to resume |
-  | --debug, -d | False | | Set debug mode |
-  | --watch, -w | False | | Set watch mode |
+  | Name, shorthand | Required | Default | Description |
+  | ---------------- | ----- | --- | -------------------------------- |
+  | id | True | | The id of the experiment you want to resume |
+  | --port, -p | False | | Rest port of the experiment you want to resume |
+  | --debug, -d | False | | Set debug mode |
+  | --foreground, -f | False | | Set foreground mode, print log content to the terminal |

* Examples

diff --git a/docs/zh_CN/Tutorial/QuickStart.md b/docs/zh_CN/Tutorial/QuickStart.md
index b886debf18..164636e536 100644
--- a/docs/zh_CN/Tutorial/QuickStart.md
+++ b/docs/zh_CN/Tutorial/QuickStart.md
@@ -20,7 +20,7 @@

* On Linux and macOS, `--user` can be used to install NNI in the current user's home directory, which does not require any special privileges.
* If any error like `Segmentation fault` occurs, refer to the [FAQ](FAQ.md).
-* For the `system requirements` of NNI, refer to [Install NNI](Installation.md).
+* For the `system requirements` of NNI, refer to [Install on Linux & macOS](InstallationLinux.md) or [Windows](InstallationWin.md).

## "Hello World" on MNIST

diff --git a/docs/zh_CN/builtin_assessor.rst b/docs/zh_CN/builtin_assessor.rst
index a2df8f36c0..f108f7677d 100644
--- a/docs/zh_CN/builtin_assessor.rst
+++ b/docs/zh_CN/builtin_assessor.rst
@@ -1,6 +1,16 @@
 Builtin Assessors
 =================

+In order to save computing resources, NNI supports an early-stopping policy, configured by creating an **Assessor**.
+
+An Assessor receives the intermediate results from a trial and decides, by the specified algorithm, whether the trial should be killed. Once the trial meets the early-stopping policy (which means the Assessor is pessimistic about the final result), the Assessor kills the trial and marks its status as `"EARLY_STOPPED"`.
+
+Here is an experimental result of MNIST after using the 'Curvefitting' Assessor in 'maximize' mode; you can see that the Assessor successfully **early stopped** many trials with bad final results. With an Assessor, better results can be obtained with the same amount of computing resources.
+
+*Implemented code: config_assessor.yml*
+
+.. image:: ../img/Assessor.png
+
 ..
toctree::
    :maxdepth: 1

diff --git a/docs/zh_CN/builtin_tuner.rst b/docs/zh_CN/builtin_tuner.rst
index 1963b6aff8..d0a3ff24e0 100644
--- a/docs/zh_CN/builtin_tuner.rst
+++ b/docs/zh_CN/builtin_tuner.rst
@@ -1,5 +1,10 @@
 Builtin Tuners
-==================
+==============
+
+NNI provides an easy and fast way to adopt hyper-parameter tuning algorithms, which we call **Tuners**.
+
+A Tuner receives metrics from a trial to evaluate the performance of a specific set of hyper-parameters or a neural architecture. The Tuner then sends the configuration of the next set of hyper-parameters or architecture to a new trial.
+
 .. toctree::
    :maxdepth: 1

diff --git a/docs/zh_CN/feature_engineering.rst b/docs/zh_CN/feature_engineering.rst
index 0ed0789909..738d0afd77 100644
--- a/docs/zh_CN/feature_engineering.rst
+++ b/docs/zh_CN/feature_engineering.rst
@@ -1,7 +1,8 @@
+###################
 Feature Engineering
-===================
+###################

-We are glad to announce the alpha release of the feature engineering package in NNI.
+We are glad to introduce the feature engineering toolkit on top of NNI. It is still in the experimental stage and will evolve based on usage feedback.

We sincerely invite you to use it, give feedback, and contribute more.

diff --git a/docs/zh_CN/hpo_advanced.rst b/docs/zh_CN/hpo_advanced.rst
new file mode 100644
index 0000000000..55edf20c26
--- /dev/null
+++ b/docs/zh_CN/hpo_advanced.rst
@@ -0,0 +1,9 @@
+Advanced Features
+=================
+
+.. toctree::
+    Enable Multi-phase
+    Write a New Tuner
+    Write a New Assessor
+    Write a New Advisor
+    Write a New Training Service
diff --git a/docs/zh_CN/hyperparameter_tune.rst b/docs/zh_CN/hyperparameter_tune.rst
new file mode 100644
index 0000000000..c86223e940
--- /dev/null
+++ b/docs/zh_CN/hyperparameter_tune.rst
@@ -0,0 +1,27 @@
+#############################
+Auto (Hyper-parameter) Tuning
+#############################
+
+Auto tuning is one of the key features provided by NNI; its main scenario is
+hyper-parameter tuning. Tuning is applied to the trial code; NNI provides a number of popular
+auto tuning algorithms (called Tuners) and some early-stopping algorithms (called Assessors).
+NNI supports running trials on various training platforms, for example, on a local machine,
+on several servers in a distributed manner, or on platforms such as OpenPAI and Kubernetes.
+
+Other key features of NNI, such as model compression and feature engineering, can be further
+enhanced by auto tuning, which is described when those features are introduced.
+
+NNI is highly extensible; advanced users can customize their own Tuner, Assessor, and training
+service according to their needs.
+
+..
toctree::
+    :maxdepth: 2
+
+    Write Trial<./TrialExample/Trials>
+    Tuners
+    Assessors
+    Training Services
+    Examples
+    Web UI
+    How to Debug
+    Advanced
\ No newline at end of file
diff --git a/docs/zh_CN/index.rst b/docs/zh_CN/index.rst
index 775cbb5f08..a4d61b4c29 100644
--- a/docs/zh_CN/index.rst
+++ b/docs/zh_CN/index.rst
@@ -2,9 +2,6 @@
 Neural Network Intelligence
 ###########################

-********
-Contents
-********

 .. toctree::
    :caption: Contents
@@ -12,11 +9,14 @@ Neural Network Intelligence
    :titlesonly:

    Overview
+   Installation
    Get Started
-   Tutorials
-   Examples
+   Auto (Hyper-parameter) Tuning
+   Neural Architecture Search
+   Model Compression
+   Feature Engineering
    References
-   FAQ
-   Contribution
-   Changelog
-   Community Sharings
+   Community Sharings
+   FAQ
+   How to Contribute
+   Changelog
\ No newline at end of file
diff --git a/docs/zh_CN/installation.rst b/docs/zh_CN/installation.rst
new file mode 100644
index 0000000000..8dcc7c1c94
--- /dev/null
+++ b/docs/zh_CN/installation.rst
@@ -0,0 +1,12 @@
+############
+Installation
+############
+
+Currently we support installation on Linux, macOS, and Windows. Using Docker is also supported.
+
+.. toctree::
+    :maxdepth: 2
+
+    Linux & macOS
+    Windows
+    Use Docker
\ No newline at end of file
diff --git a/docs/zh_CN/model_compression.rst b/docs/zh_CN/model_compression.rst
index 2e273a79eb..658b254326 100644
--- a/docs/zh_CN/model_compression.rst
+++ b/docs/zh_CN/model_compression.rst
@@ -16,13 +16,6 @@ Some popular model compression algorithms are also built into NNI.
    :maxdepth: 2

    Overview
-   Level Pruner
-   AGP Pruner
-   L1Filter Pruner
-   Slim Pruner
-   Lottery Ticket Pruner
-   FPGM Pruner
-   Naive Quantizer
-   QAT Quantizer
-   DoReFa Quantizer
+   Pruner
+   Quantizer
    Automatic Model Compression
diff --git a/docs/zh_CN/nas.rst b/docs/zh_CN/nas.rst
index 611c5aefe2..abfd6ab76c 100644
--- a/docs/zh_CN/nas.rst
+++ b/docs/zh_CN/nas.rst
@@ -1,27 +1,28 @@
-##############
-NAS Algorithms
-##############
+##########################
+Neural Architecture Search
+##########################

 Automatic neural architecture search is playing an increasingly important role in finding better models.
-Recent research has proven the feasibility of automatic NAS and has found some models that beat manually designed and tuned models.
-Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution. New algorithms keep emerging.
+Recent research has proven the feasibility of automatic NAS and has found some models that beat manually tuned models.
+Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution. Moreover, new innovations keep emerging.

 However, it takes a great effort to implement NAS algorithms, and it's hard to reuse the code base of existing algorithms for new ones.
 To facilitate NAS innovations (e.g., designing and implementing new NAS models, comparing different NAS models side by side),
 an easy-to-use and flexible programming interface is crucial.

-With this motivation, our ambition is to provide a unified architecture in NNI,
+Therefore, we provide a unified interface for NAS,
 to accelerate innovations on NAS and apply state-of-the-art algorithms to real-world problems faster.

-
 For details, please refer to the following tutorials:

 .. toctree::
    :maxdepth: 2

    Overview
-   NAS Interface
+   Tutorials
    ENAS
    DARTS
    P-DARTS
    SPOS
    CDARTS
+   ProxylessNAS
+   API Reference

diff --git a/docs/zh_CN/reference.rst b/docs/zh_CN/reference.rst
index 1995f6e6f6..ee455292a0 100644
--- a/docs/zh_CN/reference.rst
+++ b/docs/zh_CN/reference.rst
@@ -2,12 +2,11 @@
 ==================

 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 2

-   Command Line
-   Python API
-   Annotation
-   Configuration
+   nnictl Commands
+   Experiment Configuration
    Search Space
-   Implement Training Services
-   Framework Library
+   NNI Annotation
+   SDK API References
+   Supported Frameworks & Libraries
diff --git a/examples/model_compress/speedup_zh_CN.md b/examples/model_compress/speedup_zh_CN.md
new file mode 100644
index 0000000000..90726bca79
--- /dev/null
+++ b/examples/model_compress/speedup_zh_CN.md
@@ -0,0 +1,96 @@
+# Speed up Masked Model
+
+*This feature is still in preview.*
+
+## Introduction
+
+Pruning algorithms usually use weight masks to simulate real pruning. Masks can be used to check the model performance of a specific pruning (or sparsity) algorithm, but there is no real speedup. Since model speedup is the ultimate goal of model pruning, we provide this tool to help convert an existing model into a smaller one, based on a user-provided mask (the mask comes from a pruning algorithm).
+
+There are two types of pruning algorithms. One is fine-grained pruning, which does not change the shape of weights or input/output tensors; sparse kernels are required to speed up a fine-grained pruned layer. The other is coarse-grained pruning (e.g., channels), where the shape of weights and input/output tensors usually changes. To speed up this kind of pruning algorithm, there is no need for sparse kernels: the pruned layers can simply be replaced with smaller ones. Since support for sparse kernels in the open-source community is still limited, we currently only support coarse-grained pruning; fine-grained pruning will be supported in the future.
+
+## Design and Implementation
+
+To speed up a model, the pruned layers should be replaced: either by a smaller layer for a coarse-grained mask, or by a sparse kernel for a fine-grained mask. A coarse-grained mask usually changes the shape of weights or input/output tensors, so shape inference should be done to check whether other, unpruned layers also need to change shape due to the shape change. Thus, in our design, there are two main steps: first, do shape inference to find out all the modules that should be replaced; second, replace the modules. The first step requires the topology (i.e., the connections) of the model; we use `jit.trace` to obtain the model graph for PyTorch.
+
+For each module, four functions should be prepared: three for shape inference and one for module replacement. The three shape-inference functions are: given weight shape, infer input/output shape; given input shape, infer weight/output shape; given output shape, infer weight/input shape. The module replacement function returns a newly created, smaller module.
+
+## Usage
+
+```python
+from nni.compression.speedup.torch import ModelSpeedup
+# model: the model to be sped up
+# dummy_input: an example input of the model, passed to `jit.trace`
+# masks_file: the mask file created by a pruning algorithm
+m_speedup = ModelSpeedup(model, dummy_input.to(device), masks_file)
+m_speedup.speedup_model()
+dummy_input = dummy_input.to(device)
+start = time.time()
+out = model(dummy_input)
+print('elapsed time: ', time.time() - start)
+```
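The coarse-grained shape inference described above can be illustrated without any framework. The helper below is a hypothetical sketch (not part of the NNI API): it takes a chain of layers described only by their channel counts, plus a 0/1 mask over one layer's output channels, and propagates the reduced channel count so the downstream layer shrinks to match — the essence of what `ModelSpeedup` does for channel pruning.

```python
# Hypothetical sketch of coarse-grained shape inference on a layer chain.
# Each layer is described only by its channel counts; a mask over one layer's
# output channels forces the next layer's input channels to shrink.

def propagate_channel_mask(layers, masks):
    """layers: list of dicts with 'in'/'out' channel counts.
    masks: {layer_index: list of 0/1 flags over that layer's output channels}.
    Returns new channel counts after replacing pruned layers with smaller ones."""
    new_layers = [dict(layer) for layer in layers]
    for idx, mask in masks.items():
        kept = sum(mask)                      # surviving output channels
        new_layers[idx]['out'] = kept         # smaller replacement layer
        if idx + 1 < len(new_layers):
            new_layers[idx + 1]['in'] = kept  # downstream layer must match
    return new_layers

# A 3-layer chain; prune half of layer 0's output channels.
chain = [{'in': 3, 'out': 64}, {'in': 64, 'out': 128}, {'in': 128, 'out': 10}]
pruned = propagate_channel_mask(chain, {0: [1, 0] * 32})
print(pruned[0]['out'], pruned[1]['in'])  # 32 32
```

The real implementation must additionally handle branching topologies (hence the need for the `jit.trace` graph) and infer shapes in both directions, but the channel-count bookkeeping is the same idea.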
+Please refer to [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/model_speedup.py) for a complete example.
+
+Note: the current implementation only works on torch 1.3.1 and torchvision 0.4.2.
+
+## Limitations
+
+Since every module requires four functions for shape inference and module replacement, this is a lot of work; we currently only implement the functions required by the examples. If you want to speed up your own model, which is not supported yet, you are welcome to contribute.
+
+For PyTorch, we can only replace modules; if there are functions in `forward`, they are not supported for now. One workaround is to turn the functions into PyTorch modules.
+
+## Speedup Results of the Examples
+
+The code of these experiments can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/model_speedup.py).
+
+### slim Pruner example
+
+On one V100 GPU, input tensor: `torch.randn(64, 3, 32, 32)`
+
+| Times | Mask Latency | Speedup Latency |
+| -- | ------- | -------- |
+| 1 | 0.01197 | 0.005107 |
+| 2 | 0.02019 | 0.008769 |
+| 4 | 0.02733 | 0.014809 |
+| 8 | 0.04310 | 0.027441 |
+| 16 | 0.07731 | 0.05008 |
+| 32 | 0.14464 | 0.10027 |
+
+### fpgm Pruner example
+
+On CPU, input tensor: `torch.randn(64, 1, 28, 28)`; the variance is large
+
+| Times | Mask Latency | Speedup Latency |
+| --- | ------- | -------- |
+| 1 | 0.01383 | 0.01839 |
+| 2 | 0.01167 | 0.003558 |
+| 4 | 0.01636 | 0.01088 |
+| 40 | 0.14412 | 0.08268 |
+| 40 | 1.29385 | 0.14408 |
+| 40 | 0.41035 | 0.46162 |
+| 400 | 6.29020 | 5.82143 |
+
+### l1filter Pruner example
+
+On one V100 GPU, input tensor: `torch.randn(64, 3, 32, 32)`
+
+| Times | Mask Latency | Speedup Latency |
+| -- | ------- | -------- |
+| 1 | 0.01026 | 0.003677 |
+| 2 | 0.01657 | 0.008161 |
+| 4 | 0.02458 | 0.020018 |
+| 8 | 0.03498 | 0.025504 |
+| 16 | 0.06757 | 0.047523 |
+| 32 | 0.10487 | 0.086442 |
+
+### APoZ Pruner example
+
+On one V100 GPU, input tensor: `torch.randn(64, 3, 32, 32)`
+
+| Times | Mask Latency | Speedup Latency |
+| -- | ------- | -------- |
+| 1 | 0.01389 | 0.004208 |
+| 2 | 0.01628 | 0.008310 |
+| 4 | 0.02521 | 0.014008 |
+| 8 | 0.03386 | 0.023923 |
+| 16 | 0.06042 | 0.046183 |
+| 32 | 0.12421 | 0.087113 |
\ No newline at end of file
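The latency columns above are simple wall-clock measurements of forward passes, in the spirit of the timing snippet in the usage section. A minimal, framework-free sketch of such a measurement loop follows; `dummy_model` is a stand-in callable (not an NNI or PyTorch API), and treating a row as the total time over its run count is an assumption made here for illustration.

```python
import time

def measure_latency(model, inputs, times):
    """Call `model` on `inputs` `times` times; return total elapsed seconds."""
    start = time.perf_counter()
    for _ in range(times):
        model(inputs)
    return time.perf_counter() - start

# Stand-in for a masked or sped-up model's forward pass.
def dummy_model(x):
    return [v * 2 for v in x]

# Mirrors the "Times" column: measure at increasing run counts.
for times in (1, 2, 4, 8):
    elapsed = measure_latency(dummy_model, list(range(10000)), times)
    print(times, round(elapsed, 5))
```

For GPU models, a warm-up pass and device synchronization before reading the clock would also be needed to get stable numbers.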