
[Doc] Fix dead links #517

Merged · 118 commits · Nov 7, 2022

Commits
1684b05
first commit for yolov7
ziqi-jin Jul 13, 2022
71c00d9
pybind for yolov7
ziqi-jin Jul 14, 2022
21ab2f9
CPP README.md
ziqi-jin Jul 14, 2022
d63e862
CPP README.md
ziqi-jin Jul 14, 2022
7b3b0e2
modified yolov7.cc
ziqi-jin Jul 14, 2022
d039e80
README.md
ziqi-jin Jul 15, 2022
a34a815
python file modify
ziqi-jin Jul 18, 2022
eb010a8
merge test
ziqi-jin Jul 18, 2022
39f64f2
delete license in fastdeploy/
ziqi-jin Jul 18, 2022
d071b37
repush the conflict part
ziqi-jin Jul 18, 2022
d5026ca
README.md modified
ziqi-jin Jul 18, 2022
fb376ad
README.md modified
ziqi-jin Jul 18, 2022
4b8737c
file path modified
ziqi-jin Jul 18, 2022
ce922a0
file path modified
ziqi-jin Jul 18, 2022
6e00b82
file path modified
ziqi-jin Jul 18, 2022
8c359fb
file path modified
ziqi-jin Jul 18, 2022
906c730
file path modified
ziqi-jin Jul 18, 2022
80c1223
README modified
ziqi-jin Jul 18, 2022
6072757
README modified
ziqi-jin Jul 18, 2022
2c6e6a4
move some helpers to private
ziqi-jin Jul 18, 2022
48136f0
add examples for yolov7
ziqi-jin Jul 18, 2022
6feca92
api.md modified
ziqi-jin Jul 18, 2022
ae70d4f
api.md modified
ziqi-jin Jul 18, 2022
f591b85
api.md modified
ziqi-jin Jul 18, 2022
f0def41
YOLOv7
ziqi-jin Jul 18, 2022
15b9160
yolov7 release link
ziqi-jin Jul 18, 2022
4706e8c
yolov7 release link
ziqi-jin Jul 18, 2022
dc83584
yolov7 release link
ziqi-jin Jul 18, 2022
086debd
copyright
ziqi-jin Jul 18, 2022
4f980b9
change some helpers to private
ziqi-jin Jul 18, 2022
2e61c95
Merge branch 'develop' into develop
ziqi-jin Jul 19, 2022
80beadf
change variables to const and fix documents.
ziqi-jin Jul 19, 2022
8103772
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 19, 2022
f5f7a86
gitignore
ziqi-jin Jul 19, 2022
e6cec25
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e25e4f2
Transfer some funtions to private member of class
ziqi-jin Jul 19, 2022
e8a8439
Merge from develop (#9)
ziqi-jin Jul 20, 2022
a182893
first commit for yolor
ziqi-jin Jul 20, 2022
3aa015f
for merge
ziqi-jin Jul 20, 2022
d6b98aa
Develop (#11)
ziqi-jin Jul 20, 2022
871cfc6
Merge branch 'yolor' into develop
ziqi-jin Jul 20, 2022
013921a
Yolor (#16)
ziqi-jin Jul 21, 2022
7a5a6d9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 21, 2022
c996117
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 22, 2022
0aefe32
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
2330414
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 26, 2022
4660161
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 27, 2022
033c18e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
6c94d65
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 28, 2022
85fb256
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Jul 29, 2022
90ca4cb
add is_dynamic for YOLO series (#22)
ziqi-jin Jul 29, 2022
f6a4ed2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 1, 2022
3682091
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 3, 2022
ca1e110
Merge remote-tracking branch 'upstream/develop' into develop
ziqi-jin Aug 8, 2022
93ba6a6
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 9, 2022
767842e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
cc32733
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 10, 2022
2771a3b
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
a1e29ac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
5ecc6fe
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 11, 2022
2780588
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 12, 2022
c00be81
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
9082178
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
4b14f56
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 15, 2022
4876b82
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 16, 2022
9cebb1f
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 18, 2022
d1e3b29
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 19, 2022
69cf0d2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 22, 2022
2ff10e1
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 23, 2022
a673a2c
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 25, 2022
832d777
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 25, 2022
e513eac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Aug 29, 2022
ded2054
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 1, 2022
19db925
modify ppmatting backend and docs
ziqi-jin Sep 1, 2022
15be4a6
modify ppmatting docs
ziqi-jin Sep 1, 2022
3a5b93a
fix the PPMatting size problem
ziqi-jin Sep 3, 2022
f765853
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 3, 2022
c2332b0
fix LimitShort's log
ziqi-jin Sep 3, 2022
950f948
retrigger ci
ziqi-jin Sep 4, 2022
64a13c9
modify PPMatting docs
ziqi-jin Sep 4, 2022
09c073d
modify the way for dealing with LimitShort
ziqi-jin Sep 6, 2022
99969b6
Merge branch 'develop' into develop
jiangjiajun Sep 6, 2022
cf248de
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 8, 2022
9d4a4c9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 13, 2022
622fbf7
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 15, 2022
d1cf1ad
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 19, 2022
ff9a07e
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 21, 2022
2707b03
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Sep 22, 2022
896d1d9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 8, 2022
25ee7e2
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 12, 2022
79068d3
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 17, 2022
74b3ee0
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 21, 2022
a75c0c4
add python comments for external models
ziqi-jin Oct 21, 2022
985d273
modify resnet c++ comments
ziqi-jin Oct 21, 2022
e32a25c
modify C++ comments for external models
ziqi-jin Oct 21, 2022
8a73af6
modify python comments and add result class comments
ziqi-jin Oct 21, 2022
2aa7939
Merge branch 'develop' into doc_python
jiangjiajun Oct 22, 2022
887c53a
Merge branch 'develop' into doc_python
jiangjiajun Oct 23, 2022
963b9b9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 24, 2022
337e8c0
fix comments compile error
ziqi-jin Oct 24, 2022
d1d6890
modify result.h comments
ziqi-jin Oct 24, 2022
67234dd
Merge branch 'develop' into doc_python
jiangjiajun Oct 24, 2022
440e2a9
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 24, 2022
ac35141
Merge branch 'doc_python' into develop
ziqi-jin Oct 24, 2022
3d83785
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 24, 2022
363a485
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 25, 2022
dc44eac
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 26, 2022
07717b4
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 26, 2022
33b4c62
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Oct 27, 2022
f911f3b
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 1, 2022
ebb9365
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 2, 2022
0ac31bd
Merge branch 'PaddlePaddle:develop' into develop
ziqi-jin Nov 7, 2022
8ced8e8
first commit for dead links
ziqi-jin Nov 7, 2022
0482aeb
first commit for dead links
ziqi-jin Nov 7, 2022
cf3291e
fix docs deadlinks
ziqi-jin Nov 7, 2022
304e313
fix docs deadlinks
ziqi-jin Nov 7, 2022
6dde17a
fix examples deadlinks
ziqi-jin Nov 7, 2022
afe52bf
fix examples deadlinks
ziqi-jin Nov 7, 2022
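The commit messages above culminate in fixing dead relative links across the docs and examples trees. For illustration only (this is a sketch of mine, not tooling from this PR — the function name and regex are assumptions), a minimal scanner along these lines can catch broken relative Markdown links before they ship:

```python
import re
from pathlib import Path

# Matches [text](target), capturing the target and dropping any #fragment.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#\s]+)(?:#[^)]*)?\)")

def find_dead_links(repo_root, md_relpath):
    """Return relative link targets in a Markdown file that do not exist.

    External (http/https/mailto) links are skipped; relative targets are
    resolved against the directory containing the Markdown file.
    """
    root = Path(repo_root)
    md_file = root / md_relpath
    base = md_file.parent
    dead = []
    for target in LINK_RE.findall(md_file.read_text(encoding="utf-8")):
        if target.startswith(("http://", "https://", "mailto:")):
            continue
        if not (base / target).exists():
            dead.append(target)
    return dead
```

Run over every `*.md` file under the repo root, a checker like this flags exactly the class of breakage the commits below repair.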
2 changes: 1 addition & 1 deletion docs/api_docs/python/README.md
@@ -2,7 +2,7 @@

This directory help to generate Python API documents for FastDeploy.

-1. First, to generate the latest api documents, you need to install the latest FastDeploy, refer [build and install](en/build_and_install) to build FastDeploy python wheel package with the latest code.
+1. First, to generate the latest api documents, you need to install the latest FastDeploy, refer [build and install](../../cn/build_and_install) to build FastDeploy python wheel package with the latest code.
2. After installed FastDeploy in your python environment, there are some dependencies need to install, execute command `pip install -r requirements.txt` in this directory
3. Execute command `make html` to generate API documents

2 changes: 1 addition & 1 deletion docs/cn/build_and_install/android.md
@@ -102,4 +102,4 @@ make install
如何使用FastDeploy Android C++ SDK 请参考使用案例文档:
- [图像分类Android使用文档](../../../examples/vision/classification/paddleclas/android/README.md)
- [目标检测Android使用文档](../../../examples/vision/detection/paddledetection/android/README.md)
-- [在 Android 通过 JNI 中使用 FastDeploy C++ SDK](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [在 Android 通过 JNI 中使用 FastDeploy C++ SDK](../../cn/faq/use_cpp_sdk_on_android.md)
2 changes: 1 addition & 1 deletion docs/cn/faq/use_sdk_on_windows.md
@@ -218,7 +218,7 @@ D:\qiuyanjun\fastdeploy_test\infer_ppyoloe\x64\Release\infer_ppyoloe.exe
![image](https://user-images.githubusercontent.com/31974251/192144782-79bccf8f-65d0-4f22-9f41-81751c530319.png)

(2)其中infer_ppyoloe.cpp的代码可以直接从examples中的代码拷贝过来:
-- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)
+- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)

(3)CMakeLists.txt主要包括配置FastDeploy C++ SDK的路径,如果是GPU版本的SDK,还需要配置CUDA_DIRECTORY为CUDA的安装路径,CMakeLists.txt的配置如下:

8 changes: 4 additions & 4 deletions docs/en/faq/use_sdk_on_windows.md
@@ -179,7 +179,7 @@ D:\qiuyanjun\fastdeploy_build\built\fastdeploy-win-x64-gpu-0.2.1\third_libs\inst

![image](https://user-images.githubusercontent.com/31974251/192827842-1f05d435-8a3e-492b-a3b7-d5e88f85f814.png)

Compile successfully, you can see the exe saved in:

```bat
D:\qiuyanjun\fastdeploy_test\infer_ppyoloe\x64\Release\infer_ppyoloe.exe
@@ -221,7 +221,7 @@ This section is for CMake users and describes how to create CMake projects in Vi
![image](https://user-images.githubusercontent.com/31974251/192144782-79bccf8f-65d0-4f22-9f41-81751c530319.png)

(2)The code of infer_ppyoloe.cpp can be copied directly from the code in examples:
-- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)
+- [examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc](../../../examples/vision/detection/paddledetection/cpp/infer_ppyoloe.cc)

(3)CMakeLists.txt mainly includes the configuration of the path of FastDeploy C++ SDK, if it is the GPU version of the SDK, you also need to configure CUDA_DIRECTORY as the installation path of CUDA, the configuration of CMakeLists.txt is as follows:

@@ -361,7 +361,7 @@ A brief description of the usage is as follows.
#### 4.1.2 fastdeploy_init.bat View all dll, lib and include paths in the SDK
<div id="CommandLineDeps12"></div>

Go to the root directory of the SDK and run the show command to view all the dll, lib and include paths in the SDK. In the following command, %cd% means the current directory (the root directory of the SDK).

```bat
D:\path-to-fastdeploy-sdk-dir>fastdeploy_init.bat show %cd%
@@ -504,7 +504,7 @@ copy /Y %FASTDEPLOY_HOME%\third_libs\install\yaml-cpp\lib\*.dll Release\
copy /Y %FASTDEPLOY_HOME%\third_libs\install\openvino\bin\*.dll Release\
copy /Y %FASTDEPLOY_HOME%\third_libs\install\openvino\bin\*.xml Release\
copy /Y %FASTDEPLOY_HOME%\third_libs\install\openvino\3rdparty\tbb\bin\*.dll Release\
```
Note that if you compile the latest SDK or version >0.2.1 by yourself, the opencv and openvino directory structure has changed and the path needs to be modified appropriately. For example:
```bat
copy /Y %FASTDEPLOY_HOME%\third_libs\install\opencv\build\x64\vc15\bin\*.dll Release\
2 changes: 1 addition & 1 deletion docs/en/quantize.md
@@ -27,7 +27,7 @@ FastDeploy基于PaddleSlim, 集成了一键模型量化的工具, 同时, FastDe

### 用户使用FastDeploy一键模型量化工具来量化模型
Fastdeploy基于PaddleSlim, 为用户提供了一键模型量化的工具,请参考如下文档进行模型量化.
-- [FastDeploy 一键模型量化](../../tools/quantization/)
+- [FastDeploy 一键模型量化](../../tools/auto_compression/)
当用户获得产出的量化模型之后,即可以使用FastDeploy来部署量化模型.


2 changes: 1 addition & 1 deletion examples/text/ernie-3.0/serving/README.md
@@ -168,4 +168,4 @@ entity: 华夏 label: LOC pos: [14, 15]

## 配置修改

-当前分类任务(ernie_seqcls_model/config.pbtxt)默认配置在CPU上运行OpenVINO引擎; 序列标注任务默认配置在GPU上运行Paddle引擎。如果要在CPU/GPU或其他推理引擎上运行, 需要修改配置,详情请参考[配置文档](../../../../../serving/docs/zh_CN/model_configuration.md)
+当前分类任务(ernie_seqcls_model/config.pbtxt)默认配置在CPU上运行OpenVINO引擎; 序列标注任务默认配置在GPU上运行Paddle引擎。如果要在CPU/GPU或其他推理引擎上运行, 需要修改配置,详情请参考[配置文档](../../../../serving/docs/zh_CN/model_configuration.md)
2 changes: 1 addition & 1 deletion examples/vision/README.md
@@ -30,4 +30,4 @@ FastDeploy针对飞桨的视觉套件,以及外部热门模型,提供端到
- 加载模型
- 调用`predict`接口

-FastDeploy在各视觉模型部署时,也支持一键切换后端推理引擎,详情参阅[如何切换模型推理引擎](../../docs/runtime/how_to_change_backend.md)。
+FastDeploy在各视觉模型部署时,也支持一键切换后端推理引擎,详情参阅[如何切换模型推理引擎](../../docs/cn/faq/how_to_change_backend.md)。
@@ -8,7 +8,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)

## 以量化后的ResNet50_Vd模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.
@@ -8,7 +8,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)


## 以量化后的ResNet50_Vd模型为例, 进行部署
3 changes: 1 addition & 2 deletions examples/vision/classification/paddleclas/web/README.md
@@ -6,7 +6,7 @@

## 前端部署图像分类模型

-图像分类模型web demo使用[**参考文档**](../../../../examples/application/js/web_demo)
+图像分类模型web demo使用[**参考文档**](../../../../application/js/web_demo/)


## MobileNet js接口
@@ -34,4 +34,3 @@ console.log(res);

- [PaddleClas模型 python部署](../../paddleclas/python/)
- [PaddleClas模型 C++部署](../cpp/)

8 changes: 4 additions & 4 deletions examples/vision/classification/resnet/cpp/README.md
@@ -4,8 +4,8 @@

在部署前,需确认以下两个步骤

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/environment.md)
-- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/quick_start)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. 根据开发环境,下载预编译部署库和samples代码,参考[FastDeploy预编译库](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

以Linux上 ResNet50 推理为例,在本目录执行如下命令即可完成编译测试

@@ -33,7 +33,7 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
```

以上命令只适用于Linux或MacOS, Windows下SDK的使用方式请参考:
-- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/compile/how_to_use_sdk_on_windows.md)
+- [如何在Windows中使用FastDeploy C++ SDK](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## ResNet C++接口

@@ -74,4 +74,4 @@ fastdeploy::vision::classification::ResNet(
- [模型介绍](../../)
- [Python部署](../python)
- [视觉模型预测结果](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/runtime/how_to_change_backend.md)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
6 changes: 3 additions & 3 deletions examples/vision/classification/resnet/python/README.md
@@ -2,8 +2,8 @@

在部署前,需确认以下两个步骤

-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/environment.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/quick_start)
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

本目录下提供`infer.py`快速完成ResNet50_vd在CPU/GPU,以及GPU上通过TensorRT加速部署的示例。执行如下脚本即可完成

@@ -69,4 +69,4 @@ fd.vision.classification.ResNet(model_file, params_file, runtime_option=None, mo
- [ResNet 模型介绍](..)
- [ResNet C++部署](../cpp)
- [模型预测结果说明](../../../../../docs/api/vision_results/)
- [如何切换模型推理后端引擎](../../../../../docs/runtime/how_to_change_backend.md)
- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
@@ -9,7 +9,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)

## 以量化后的PP-YOLOE-l模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.
@@ -8,7 +8,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)


## 以量化后的PP-YOLOE-l模型为例, 进行部署
2 changes: 1 addition & 1 deletion examples/vision/detection/yolov5/quantize/cpp/README.md
@@ -9,7 +9,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.

## 以量化后的YOLOv5s模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.
2 changes: 1 addition & 1 deletion examples/vision/detection/yolov5/quantize/python/README.md
@@ -8,7 +8,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.


## 以量化后的YOLOv5s模型为例, 进行部署
2 changes: 1 addition & 1 deletion examples/vision/detection/yolov6/quantize/cpp/README.md
@@ -9,7 +9,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.

## 以量化后的YOLOv6s模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.
2 changes: 1 addition & 1 deletion examples/vision/detection/yolov6/quantize/python/README.md
@@ -8,7 +8,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.

## 以量化后的YOLOv6s模型为例, 进行部署
```bash
20 changes: 10 additions & 10 deletions examples/vision/detection/yolov7/python/README_EN.md
@@ -4,8 +4,8 @@ English | [简体中文](README.md)

Two steps before deployment:

-- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/docs_en/environment.md)
-- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/docs_en/quick_start)
+- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)


This doc provides a quick `infer.py` demo of YOLOv7 deployment on CPU/GPU, and accelerated GPU deployment by TensorRT. Run the following command:
@@ -21,7 +21,7 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000

# CPU Inference
python infer.py --model yolov7.onnx --image 000000014439.jpg --device cpu
# GPU
python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu
# GPU上使用TensorRT推理
python infer.py --model yolov7.onnx --image 000000014439.jpg --device gpu --use_trt True
@@ -51,18 +51,18 @@ YOLOv7 model loading and initialisation, with model_file being the exported ONNX
> ```python
> YOLOv7.predict(image_data, conf_threshold=0.25, nms_iou_threshold=0.5)
> ```
>
> Model prediction interface with direct output of detection results from the image input.
>
> **Parameters**
>
> > * **image_data**(np.ndarray): Input image. Images need to be in HWC or BGR format
> > * **conf_threshold**(float): Filter threshold for detection box confidence
> > * **nms_iou_threshold**(float): iou thresholds during NMS processing

> **Return**
>
>
> > Return to`fastdeploy.vision.DetectionResult`Struct. For more details, please refer to [Vision Model Results](../../../../../docs/api/vision_results/)

### Class Member Variables

@@ -80,5 +80,5 @@ Users can modify the following pre-processing parameters for their needs. This w

- [YOLOv7 Model Introduction](..)
- [YOLOv7 C++ Deployment](../cpp)
-- [Vision Model Results](../../../../../docs/docs_en/api/vision_results/)
-- [how to change inference backend](../../../../../docs/docs_en/runtime/how_to_change_inference_backend.md)
+- [Vision Model Results](../../../../../docs/api/vision_results/)
+- [how to change inference backend](../../../../../docs/en/faq/how_to_change_backend.md)
2 changes: 1 addition & 1 deletion examples/vision/detection/yolov7/quantize/cpp/README.md
@@ -9,7 +9,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.

## 以量化后的YOLOv7模型为例, 进行部署
在本目录执行如下命令即可完成编译,以及量化模型部署.
2 changes: 1 addition & 1 deletion examples/vision/detection/yolov7/quantize/python/README.md
@@ -8,7 +8,7 @@

### 量化模型准备
- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.

## 以量化后的YOLOv7模型为例, 进行部署
```bash
@@ -71,4 +71,4 @@ PPTinyPosePipeline模型加载和初始化,其中det_model是使用`fd.vision.
- [Pipeline 模型介绍](..)
- [Pipeline C++部署](../cpp)
- [模型预测结果说明](../../../../../docs/api/vision_results/)
-- [如何切换模型推理后端引擎](../../../../../docs/runtime/how_to_change_backend.md)
+- [如何切换模型推理后端引擎](../../../../../docs/cn/faq/how_to_change_backend.md)
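Most hunks in this PR change only the number of `../` segments in a link: a README six directories deep needs six levels of `../` to reach a repo-root directory like `tools/`. The correct prefix is simply the relative path from the linking file's directory to the target, which can be computed rather than counted by hand. The helper below is an illustrative sketch of mine, not FastDeploy tooling:

```python
import posixpath

def relative_link(from_md, to_path):
    """Markdown-relative link from one repo file to another.

    Both arguments are repo-root-relative POSIX-style paths.
    """
    start = posixpath.dirname(from_md)
    return posixpath.relpath(to_path, start=start)

# The quantize README six directories deep climbs six levels, matching
# the corrected links in this PR:
print(relative_link(
    "examples/vision/detection/yolov5/quantize/cpp/README.md",
    "tools/auto_compression",
))  # -> ../../../../../../tools/auto_compression
```

The same call reproduces the shallower fixes too, e.g. `docs/en/quantize.md` needs only `../../tools/auto_compression`.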