[Model] Add PIPNet and FaceLandmark1000 Support #548

Merged on Nov 16, 2022
Commits (131)
1684b05
first commit for yolov7
ziqi-jin Jul 13, 2022
71c00d9
pybind for yolov7
ziqi-jin Jul 14, 2022
21ab2f9
CPP README.md
ziqi-jin Jul 14, 2022
d63e862
CPP README.md
ziqi-jin Jul 14, 2022
7b3b0e2
modified yolov7.cc
ziqi-jin Jul 14, 2022
d039e80
README.md
ziqi-jin Jul 15, 2022
a34a815
python file modify
ziqi-jin Jul 18, 2022
eb010a8
merge test
ziqi-jin Jul 18, 2022
39f64f2
delete license in fastdeploy/
ziqi-jin Jul 18, 2022
d071b37
repush the conflict part
ziqi-jin Jul 18, 2022
d5026ca
README.md modified
ziqi-jin Jul 18, 2022
fb376ad
README.md modified
ziqi-jin Jul 18, 2022
4b8737c
file path modified
ziqi-jin Jul 18, 2022
ce922a0
file path modified
ziqi-jin Jul 18, 2022
6e00b82
file path modified
ziqi-jin Jul 18, 2022
8c359fb
file path modified
ziqi-jin Jul 18, 2022
906c730
file path modified
ziqi-jin Jul 18, 2022
80c1223
README modified
ziqi-jin Jul 18, 2022
6072757
README modified
ziqi-jin Jul 18, 2022
2c6e6a4
move some helpers to private
ziqi-jin Jul 18, 2022
48136f0
add examples for yolov7
ziqi-jin Jul 18, 2022
6feca92
api.md modified
ziqi-jin Jul 18, 2022
ae70d4f
api.md modified
ziqi-jin Jul 18, 2022
f591b85
api.md modified
ziqi-jin Jul 18, 2022
f0def41
YOLOv7
ziqi-jin Jul 18, 2022
15b9160
yolov7 release link
ziqi-jin Jul 18, 2022
4706e8c
yolov7 release link
ziqi-jin Jul 18, 2022
dc83584
yolov7 release link
ziqi-jin Jul 18, 2022
086debd
copyright
ziqi-jin Jul 18, 2022
4f980b9
change some helpers to private
ziqi-jin Jul 18, 2022
2e61c95
Merge branch 'develop' into develop
ziqi-jin Jul 19, 2022
80beadf
change variables to const and fix documents.
ziqi-jin Jul 19, 2022
8103772
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 19, 2022
f5f7a86
gitignore
ziqi-jin Jul 19, 2022
e6cec25
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e25e4f2
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e8a8439
Merge from develop (#9)
ziqi-jin Jul 20, 2022
a182893
first commit for yolor
ziqi-jin Jul 20, 2022
3aa015f
for merge
ziqi-jin Jul 20, 2022
d6b98aa
Develop (#11)
ziqi-jin Jul 20, 2022
871cfc6
Merge branch 'yolor' into develop
ziqi-jin Jul 20, 2022
013921a
Yolor (#16)
ziqi-jin Jul 21, 2022
7a5a6d9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 21, 2022
c996117
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 22, 2022
0aefe32
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
2330414
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
4660161
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 27, 2022
033c18e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
6c94d65
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
85fb256
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 29, 2022
90ca4cb
add is_dynamic for YOLO series (#22)
ziqi-jin Jul 29, 2022
f6a4ed2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 1, 2022
3682091
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 3, 2022
ca1e110
Merge remote-tracking branch 'upstream/develop' into develop
ziqi-jin Aug 8, 2022
93ba6a6
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 9, 2022
767842e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
cc32733
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
2771a3b
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
a1e29ac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
5ecc6fe
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
2780588
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 12, 2022
c00be81
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
9082178
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
4b14f56
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
4876b82
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 16, 2022
9cebb1f
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 18, 2022
d1e3b29
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 19, 2022
69cf0d2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 22, 2022
2ff10e1
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 23, 2022
a673a2c
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 25, 2022
832d777
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 25, 2022
e513eac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 29, 2022
ded2054
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 1, 2022
19db925
modify ppmatting backend and docs
ziqi-jin Sep 1, 2022
15be4a6
modify ppmatting docs
ziqi-jin Sep 1, 2022
3a5b93a
fix the PPMatting size problem
ziqi-jin Sep 3, 2022
f765853
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 3, 2022
c2332b0
fix LimitShort's log
ziqi-jin Sep 3, 2022
950f948
retrigger ci
ziqi-jin Sep 4, 2022
64a13c9
modify PPMatting docs
ziqi-jin Sep 4, 2022
09c073d
modify the way for dealing with LimitShort
ziqi-jin Sep 6, 2022
99969b6
Merge branch 'develop' into develop
jiangjiajun Sep 6, 2022
cf248de
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 8, 2022
9d4a4c9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 13, 2022
622fbf7
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 15, 2022
d1cf1ad
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 19, 2022
ff9a07e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 21, 2022
2707b03
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 22, 2022
896d1d9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 8, 2022
25ee7e2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 12, 2022
79068d3
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 17, 2022
74b3ee0
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 21, 2022
a75c0c4
add python comments for external models
ziqi-jin Oct 21, 2022
985d273
modify resnet c++ comments
ziqi-jin Oct 21, 2022
e32a25c
modify C++ comments for external models
ziqi-jin Oct 21, 2022
8a73af6
modify python comments and add result class comments
ziqi-jin Oct 21, 2022
2aa7939
Merge branch 'develop' into doc_python
jiangjiajun Oct 22, 2022
887c53a
Merge branch 'develop' into doc_python
jiangjiajun Oct 23, 2022
963b9b9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 24, 2022
337e8c0
fix comments compile error
ziqi-jin Oct 24, 2022
d1d6890
modify result.h comments
ziqi-jin Oct 24, 2022
67234dd
Merge branch 'develop' into doc_python
jiangjiajun Oct 24, 2022
440e2a9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 24, 2022
ac35141
Merge branch 'doc_python' into develop
ziqi-jin Oct 24, 2022
3d83785
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 24, 2022
363a485
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 25, 2022
dc44eac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 26, 2022
07717b4
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 26, 2022
33b4c62
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 27, 2022
f911f3b
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 1, 2022
ebb9365
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 2, 2022
0c60494
c++ version for FaceLandmark1000
ziqi-jin Nov 3, 2022
0ac31bd
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 7, 2022
b2c068e
add pipnet land1000 sigle test and python code
ziqi-jin Nov 9, 2022
84083c1
fix facelandmark1000 sigle test
ziqi-jin Nov 9, 2022
83c726f
fix python examples for PIPNet and FaceLandmark1000
ziqi-jin Nov 9, 2022
96e1783
fix examples links for PIPNet and FaceLandmark1000
ziqi-jin Nov 9, 2022
2d71fcf
modify test_vision_colorspace_convert.cc
ziqi-jin Nov 9, 2022
a2c30bd
modify facealign readme
ziqi-jin Nov 9, 2022
dce7000
retrigger ci
ziqi-jin Nov 10, 2022
f6a0f8e
modify README
ziqi-jin Nov 11, 2022
c7ee59c
test ci
ziqi-jin Nov 11, 2022
661a1ef
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 11, 2022
e6b28f2
Merge pull request #40 from ziqi-jin/develop
ziqi-jin Nov 11, 2022
87d0c26
fix download_prebuilt_libraries.md
ziqi-jin Nov 11, 2022
52e47d6
fix download_prebuilt_libraries.md
ziqi-jin Nov 11, 2022
03ae82d
modify for comments
ziqi-jin Nov 11, 2022
1960ee7
modify supported_num_landmarks
ziqi-jin Nov 13, 2022
6fc8b14
retrigger ci
ziqi-jin Nov 14, 2022
947fe9b
check code style
ziqi-jin Nov 16, 2022
20b3fa3
check code style
ziqi-jin Nov 16, 2022
4 changes: 2 additions & 2 deletions docs/en/build_and_install/download_prebuilt_libraries.md
@@ -3,8 +3,8 @@
FastDeploy provides pre-built libraries for developers to download and install directly. Meanwhile, FastDeploy also offers easy access to compile so that developers can compile FastDeploy according to their own needs.

This article is divided into two parts:
- [1.GPU Deployment Environment](##GPU Deployment Environment)
- [2.CPU Deployment Environment](##CPU Deployment Environment)
- [1.GPU Deployment Environment](#gpu-deployment-environment)
- [2.CPU Deployment Environment](#cpu-deployment-environment)

## GPU Deployment Environment

2 changes: 2 additions & 0 deletions examples/vision/facealign/README.md
@@ -5,3 +5,5 @@
FastDeploy currently supports deployment of the following face alignment (facial keypoint detection) models
| Model | Description | Model Format | Version |
| :--- | :--- | :------- | :--- |
| [Hsintao/pfld_106_face_landmarks](./pfld) | PFLD series models | ONNX | [CommitID:e150195](https://github.com/Hsintao/pfld_106_face_landmarks/commit/e150195) |
| [Single430/FaceLandmark1000](./face_landmark_1000) | FaceLandmark1000 series models | ONNX | [CommitID:1a951b6](https://github.com/Single430/FaceLandmark1000/tree/1a951b6) |
| [jhb86253817/PIPNet](./pipnet) | PIPNet series models | ONNX | [CommitID:b9eab58](https://github.com/jhb86253817/PIPNet/tree/b9eab58) |
25 changes: 25 additions & 0 deletions examples/vision/facealign/face_landmark_1000/README.md
@@ -0,0 +1,25 @@
# FaceLandmark Model Deployment

## Model Version

- [FaceLandmark1000](https://github.com/Single430/FaceLandmark1000/tree/1a951b6)

## Supported Model List

FastDeploy currently supports deployment of the following models

- [FaceLandmark1000 model](https://github.com/Single430/FaceLandmark1000)

## Download Pre-trained Models

For developers' convenience, the exported FaceLandmark models are provided below and can be downloaded and used directly.

| Model | Parameter Size | Accuracy | Notes |
|:---------------------------------------------------------------- |:----- |:----- | :------ |
| [FaceLandmark1000](https://bj.bcebos.com/paddlehub/fastdeploy/FaceLandmark1000.onnx) | 2.1M | - | - |


## Detailed Deployment Documentation

- [Python Deployment](python)
- [C++ Deployment](cpp)
18 changes: 18 additions & 0 deletions examples/vision/facealign/face_landmark_1000/cpp/CMakeLists.txt
@@ -0,0 +1,18 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Specify the path to the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/utils/gflags.cmake)
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy header file dependencies
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy library dependencies
if(UNIX AND (NOT APPLE) AND (NOT ANDROID))
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS} gflags pthread)
else()
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS} gflags)
endif()
84 changes: 84 additions & 0 deletions examples/vision/facealign/face_landmark_1000/cpp/README.md
@@ -0,0 +1,84 @@
# FaceLandmark1000 C++ Deployment Example

This directory provides `infer.cc` to quickly deploy FaceLandmark1000 on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements, refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the prebuilt deployment library and sample code according to your development environment, refer to [FastDeploy Prebuilt Libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test. Make sure your FastDeploy version is 0.7.0 or above (x.x.x >= 0.7.0), which supports the FaceLandmark1000 model.

```bash
mkdir build
cd build
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the officially converted FaceLandmark1000 model file and test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/FaceLandmark1000.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/facealign_input.png

# CPU inference
./infer_demo --model FaceLandmark1000.onnx --image facealign_input.png --device cpu
# GPU inference
./infer_demo --model FaceLandmark1000.onnx --image facealign_input.png --device gpu
# TensorRT inference on GPU
./infer_demo --model FaceLandmark1000.onnx --image facealign_input.png --device gpu --backend trt
```

After running, the visualized result is shown in the figure below

<div width="500">
<img width="470" height="384" float="left" src="https://user-images.githubusercontent.com/67993288/200761309-90c096e2-c2f3-4140-8012-32ed84e5f389.jpg">
</div>

The above commands only apply to Linux or macOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## FaceLandmark1000 C++ Interface

### FaceLandmark1000 Class

```c++
fastdeploy::vision::facealign::FaceLandmark1000(
const string& model_file,
const string& params_file = "",
const RuntimeOption& runtime_option = RuntimeOption(),
const ModelFormat& model_format = ModelFormat::ONNX)
```
Loads and initializes the FaceLandmark1000 model, where model_file is the exported model in ONNX format.

**Parameters**

> * **model_file**(str): Path to the model file
> * **params_file**(str): Path to the parameters file; when the model format is ONNX, pass an empty string
> * **runtime_option**(RuntimeOption): Backend inference configuration; if not set, the default configuration is used
> * **model_format**(ModelFormat): Model format, ONNX by default

#### Predict Function
> ```c++
> FaceLandmark1000::Predict(cv::Mat* im, FaceAlignmentResult* result)
> ```
>
> Model prediction interface: takes an image as input and directly outputs the landmarks result.
>
> **Parameters**
>
> > * **im**: Input image, which must be in HWC, BGR format
> > * **result**: Landmarks result; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for a description of FaceAlignmentResult

### Class Member Variables

Users can modify the following preprocessing parameters according to their actual needs, which affects the final inference and deployment results

> > * **size**(vector&lt;int&gt;): Modifies the resize dimensions used during preprocessing; contains two integer elements representing [width, height], with a default value of [128, 128]

## Other Documents

- [Model Introduction](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
110 changes: 110 additions & 0 deletions examples/vision/facealign/face_landmark_1000/cpp/infer.cc
@@ -0,0 +1,110 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"
#include "gflags/gflags.h"

DEFINE_string(model, "", "Directory of the inference model.");
DEFINE_string(image, "", "Path of the image file.");
DEFINE_string(device, "cpu",
"Type of inference device, support 'cpu' or 'gpu'.");
DEFINE_string(backend, "default",
"The inference runtime backend, support: ['default', 'ort', "
"'paddle', 'ov', 'trt', 'paddle_trt']");
DEFINE_bool(use_fp16, false, "Whether to use FP16 mode, only support 'trt' and 'paddle_trt' backend");

void PrintUsage() {
std::cout << "Usage: infer_demo --model model_path --image img_path --device [cpu|gpu] --backend "
"[default|ort|paddle|ov|trt|paddle_trt] "
"--use_fp16 false"
<< std::endl;
std::cout << "Default value of device: cpu" << std::endl;
std::cout << "Default value of backend: default" << std::endl;
std::cout << "Default value of use_fp16: false" << std::endl;
}

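// Translate the --device, --backend and --use_fp16 flags into a FastDeploy RuntimeOption.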
bool CreateRuntimeOption(fastdeploy::RuntimeOption* option) {
if (FLAGS_device == "gpu") {
option->UseGpu();
if (FLAGS_backend == "ort") {
option->UseOrtBackend();
} else if (FLAGS_backend == "paddle") {
option->UsePaddleBackend();
} else if (FLAGS_backend == "trt" ||
FLAGS_backend == "paddle_trt") {
option->UseTrtBackend();
option->SetTrtInputShape("input", {1, 3, 128, 128});
if (FLAGS_backend == "paddle_trt") {
option->EnablePaddleToTrt();
}
if (FLAGS_use_fp16) {
option->EnableTrtFP16();
}
} else if (FLAGS_backend == "default") {
return true;
} else {
std::cout << "While inference with GPU, only support default/ort/paddle/trt/paddle_trt now, " << FLAGS_backend << " is not supported." << std::endl;
return false;
}
} else if (FLAGS_device == "cpu") {
if (FLAGS_backend == "ort") {
option->UseOrtBackend();
} else if (FLAGS_backend == "ov") {
option->UseOpenVINOBackend();
} else if (FLAGS_backend == "paddle") {
option->UsePaddleBackend();
} else if (FLAGS_backend == "default") {
return true;
} else {
std::cout << "While inference with CPU, only support default/ort/ov/paddle now, " << FLAGS_backend << " is not supported." << std::endl;
return false;
}
} else {
std::cerr << "Only support device CPU/GPU now, " << FLAGS_device << " is not supported." << std::endl;
return false;
}

return true;
}

int main(int argc, char* argv[]) {
google::ParseCommandLineFlags(&argc, &argv, true);
auto option = fastdeploy::RuntimeOption();
if (!CreateRuntimeOption(&option)) {
PrintUsage();
return -1;
}

auto model = fastdeploy::vision::facealign::FaceLandmark1000(FLAGS_model, "", option);
if (!model.Initialized()) {
std::cerr << "Failed to initialize." << std::endl;
return -1;
}

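// Read the input image and keep an untouched copy for visualization.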
auto im = cv::imread(FLAGS_image);
auto im_bak = im.clone();

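// Run prediction; the detected landmarks are written into res.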
fastdeploy::vision::FaceAlignmentResult res;
if (!model.Predict(&im, &res)) {
std::cerr << "Failed to predict." << std::endl;
return -1;
}
std::cout << res.Str() << std::endl;

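// Draw the predicted landmarks on the original image and save the result.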
auto vis_im = fastdeploy::vision::VisFaceAlignment(im_bak, res);
cv::imwrite("vis_result.jpg", vis_im);
std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;

return 0;
}
71 changes: 71 additions & 0 deletions examples/vision/facealign/face_landmark_1000/python/README.md
@@ -0,0 +1,71 @@
# FaceLandmark1000 Python Deployment Example

Before deployment, confirm the following two steps

- 1. The software and hardware environment meets the requirements, refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python whl package, refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides `infer.py` to quickly deploy FaceLandmark1000 on CPU/GPU, and on GPU with TensorRT acceleration. Make sure your FastDeploy version is >= 0.7.0, which supports the FaceLandmark1000 model. Run the following script to complete deployment

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/facealign/face_landmark_1000/python

# Download the FaceLandmark1000 model file and test image
## Original ONNX model
wget https://bj.bcebos.com/paddlehub/fastdeploy/FaceLandmark1000.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/facealign_input.png

# CPU inference
python infer.py --model FaceLandmark1000.onnx --image facealign_input.png --device cpu
# GPU inference
python infer.py --model FaceLandmark1000.onnx --image facealign_input.png --device gpu
# TensorRT inference
python infer.py --model FaceLandmark1000.onnx --image facealign_input.png --device gpu --backend trt
```

After running, the visualized result is shown in the figure below

<div width="500">
<img width="470" height="384" float="left" src="https://user-images.githubusercontent.com/67993288/200761309-90c096e2-c2f3-4140-8012-32ed84e5f389.jpg">
</div>

## FaceLandmark1000 Python Interface

```python
fd.vision.facealign.FaceLandmark1000(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```

Loads and initializes the FaceLandmark1000 model, where model_file is the exported model in ONNX format

**Parameters**

> * **model_file**(str): Path to the model file
> * **params_file**(str): Path to the parameters file; when the model format is ONNX, this parameter does not need to be set
> * **runtime_option**(RuntimeOption): Backend inference configuration; the default is None, which uses the default configuration
> * **model_format**(ModelFormat): Model format, ONNX by default

### predict Function

> ```python
> FaceLandmark1000.predict(input_image)
> ```
>
> Model prediction interface: takes an image as input and directly outputs the landmark coordinates.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data, which must be in HWC, BGR format

> **Returns**
>
> > Returns a `fastdeploy.vision.FaceAlignmentResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the structure description
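
The following is a minimal usage sketch pieced together from the interface documented above (it is not the bundled `infer.py`); it assumes the model file and test image downloaded by the commands earlier in this document sit in the working directory, and that OpenCV's Python bindings are installed. Runtime options are left at their defaults, i.e. CPU inference.

```python
import cv2
import fastdeploy as fd

# Load the exported ONNX model; params_file is not needed for ONNX models.
model = fd.vision.facealign.FaceLandmark1000("FaceLandmark1000.onnx")

# predict() expects an HWC, BGR image, which is what cv2.imread returns.
im = cv2.imread("facealign_input.png")

# The result is a fastdeploy.vision.FaceAlignmentResult holding the landmark coordinates.
result = model.predict(im)
print(result)
```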


## Other Documents

- [FaceLandmark1000 Model Introduction](..)
- [FaceLandmark1000 C++ Deployment](../cpp)
- [Description of Model Prediction Results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)