This issue summarizes Paddle's main work related to embedded engineering, based on two weeks of discussion and research with @Xreki .
It mainly contains four parts:
Build Paddle for the embedded (mobile) device.
This part of the work is about 70% complete. Next, we will continue to improve Paddle's CMake configuration for the embedded environment, including building Paddle for iOS and building a minimum-size image for the mobile environment.
Deploy Paddle to the embedded (mobile) device.
Currently, on the mobile side, we mainly focus on inference. The Paddle C-API is now ready, and we have investigated C-API-based inference implementations for AlexNet, VGG, ResNet, and other models (a minimal sketch is shown below). The next steps are to collect performance/memory benchmarks, and to implement and analyze inference for real application models.
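For reference, here is a minimal sketch of C-API inference, modeled on the model-inference examples shipped with the C-API. The config path `trainer_config.bin`, the `INPUT_DIM` size, and the `read_binary` helper are placeholders for illustration, and `paddle_gradient_machine_randomize_param` stands in for loading trained parameters:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <paddle/capi.h>

/* Placeholder input width; depends on the actual model. */
#define INPUT_DIM 784

#define CHECK(stmt)                                                   \
  do {                                                                \
    paddle_error err = (stmt);                                        \
    if (err != kPD_NO_ERROR) {                                        \
      fprintf(stderr, "paddle error %d at line %d\n", err, __LINE__); \
      exit(1);                                                        \
    }                                                                 \
  } while (0)

/* Hypothetical helper: read a whole binary file (the merged model
 * config) into memory. */
static void* read_binary(const char* path, long* size) {
  FILE* f = fopen(path, "rb");
  if (!f) { fprintf(stderr, "cannot open %s\n", path); exit(1); }
  fseek(f, 0, SEEK_END);
  *size = ftell(f);
  fseek(f, 0, SEEK_SET);
  void* buf = malloc(*size);
  fread(buf, 1, (size_t)(*size), f);
  fclose(f);
  return buf;
}

int main() {
  /* Initialize Paddle for CPU-only inference. */
  char* argv[] = {"--use_gpu=False"};
  CHECK(paddle_init(1, (char**)argv));

  /* Load the merged model config (path is a placeholder). */
  long size;
  void* buf = read_binary("trainer_config.bin", &size);

  paddle_gradient_machine machine;
  CHECK(paddle_gradient_machine_create_for_inference(&machine, buf, (int)size));
  /* Random parameters for illustration; a real app loads trained ones. */
  CHECK(paddle_gradient_machine_randomize_param(machine));

  /* One input argument: a 1 x INPUT_DIM dense CPU matrix. */
  paddle_arguments in_args = paddle_arguments_create_none();
  CHECK(paddle_arguments_resize(in_args, 1));
  paddle_matrix mat = paddle_matrix_create(1, INPUT_DIM, false);
  paddle_real* row;
  CHECK(paddle_matrix_get_row(mat, 0, &row));
  for (int i = 0; i < INPUT_DIM; ++i) row[i] = 0.0f; /* fill real input here */
  CHECK(paddle_arguments_set_value(in_args, 0, mat));

  /* Forward pass (inference only, no training). */
  paddle_arguments out_args = paddle_arguments_create_none();
  CHECK(paddle_gradient_machine_forward(machine, in_args, out_args, false));

  /* Read back the output values. */
  paddle_matrix prob = paddle_matrix_create_none();
  CHECK(paddle_arguments_get_value(out_args, 0, prob));
  uint64_t height, width;
  CHECK(paddle_matrix_get_shape(prob, &height, &width));
  CHECK(paddle_matrix_get_row(prob, 0, &row));
  for (uint64_t i = 0; i < width; ++i) printf("%f ", row[i]);
  printf("\n");
  return 0;
}
```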
Paddle Performance
We expect the biggest problem for inference to be insufficient performance. There are already some deeply optimized libraries (such as NNPACK) for deep-learning algorithms (such as CNNs) on ARM, so we need to evaluate the performance of these libraries and investigate how to integrate them into Paddle. We also need to optimize some of the Paddle code for ARM; a simple timing harness like the sketch below can provide baseline numbers. In addition, there is a known issue that Paddle needs to reduce memory usage during inference.
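As a starting point for the benchmarks mentioned above, a wall-clock timing loop around the forward pass gives a first latency number on an ARM device. A minimal sketch, assuming `machine`, `in_args`, and `out_args` were set up as in the inference example earlier, and using the POSIX monotonic clock:

```c
#include <stdio.h>
#include <time.h>
#include <paddle/capi.h>

/* Current time in milliseconds from the POSIX monotonic clock. */
static double now_ms(void) {
  struct timespec ts;
  clock_gettime(CLOCK_MONOTONIC, &ts);
  return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

/* Time `iters` forward passes and report the average latency.
 * Assumes the machine and arguments were created as in the
 * inference sketch above. */
double benchmark_forward(paddle_gradient_machine machine,
                         paddle_arguments in_args,
                         paddle_arguments out_args,
                         int iters) {
  /* Warm up once so lazy initialization is not measured. */
  paddle_gradient_machine_forward(machine, in_args, out_args, false);

  double start = now_ms();
  for (int i = 0; i < iters; ++i) {
    paddle_gradient_machine_forward(machine, in_args, out_args, false);
  }
  double avg = (now_ms() - start) / iters;
  printf("avg forward latency: %.3f ms over %d iters\n", avg, iters);
  return avg;
}
```

Running the same harness before and after swapping in an optimized kernel library would quantify the gain from that integration.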
Application
We have already started applying Paddle to face-recognition and OCR inference on mobile. Subsequent problems and progress will also be updated here.