This section describes how to deploy PaddlePaddle models to mobile devices, along with several deployment optimization methods and benchmarks.
- Build PaddlePaddle for Android [Chinese] [English]
- Build PaddlePaddle for iOS [Chinese] [English]
- Build PaddlePaddle for Raspberry Pi 3 [Chinese] [English]
- Build PaddlePaddle for PX2
- How to build the PaddlePaddle mobile inference library with minimal size.
- Merge batch normalization before deploying the model to mobile devices.
- Compress the model before deploying it to mobile devices.
- Merge model config and parameter files into one file.
- How to deploy an int8 model for mobile inference with PaddlePaddle.
- Benchmark of MobileNet
- Benchmark of ENet
- Benchmark of DepthwiseConvolution in PaddlePaddle
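The "merge batch normalization" item above refers to folding a BatchNorm layer into the preceding convolution's weights and bias, which removes the BN op at inference time. Below is a minimal NumPy sketch of the idea; the function name and array shapes are illustrative, not PaddlePaddle's actual API:

```python
import numpy as np

def fold_batch_norm(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer that follows a convolution into the
    convolution's weights and bias (illustrative sketch, not the
    PaddlePaddle tool). conv_w has shape (out_channels, ...)."""
    scale = gamma / np.sqrt(var + eps)  # per-output-channel scale
    # Scale each output-channel slice of the conv weights.
    folded_w = conv_w * scale.reshape(-1, *([1] * (conv_w.ndim - 1)))
    # Shift the bias so that conv(x) with folded params equals BN(conv(x)).
    folded_b = (conv_b - mean) * scale + beta
    return folded_w, folded_b
```

After folding, the network computes `folded_w * x + folded_b` directly, giving the same output as convolution followed by batch normalization but with one fewer layer and fewer memory accesses, which matters on mobile hardware.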
This tutorial is contributed by PaddlePaddle and licensed under the Apache-2.0 license.