diff --git a/examples/vision/detection/README.md b/examples/vision/detection/README.md index f87d72d916..63851c076e 100644 --- a/examples/vision/detection/README.md +++ b/examples/vision/detection/README.md @@ -1,14 +1,20 @@ -人脸检测模型 +# 目标检测模型 FastDeploy目前支持如下目标检测模型部署 | 模型 | 说明 | 模型格式 | 版本 | | :--- | :--- | :------- | :--- | -| [nanodet_plus](./nanodet_plus) | NanoDetPlus系列模型 | ONNX | Release/v1.0.0-alpha-1 | -| [yolov5](./yolov5) | YOLOv5系列模型 | ONNX | Release/v6.0 | -| [yolov5lite](./yolov5lite) | YOLOv5-Lite系列模型 | ONNX | Release/v1.4 | -| [yolov6](./yolov6) | YOLOv6系列模型 | ONNX | Release/0.1.0 | -| [yolov7](./yolov7) | YOLOv7系列模型 | ONNX | Release/0.1 | -| [yolor](./yolor) | YOLOR系列模型 | ONNX | Release/weights | -| [yolox](./yolox) | YOLOX系列模型 | ONNX | Release/v0.1.1 | -| [scaledyolov4](./scaledyolov4) | ScaledYOLOv4系列模型 | ONNX | CommitID:6768003 | +| [PaddleDetection/PPYOLOE](./paddledetection) | PPYOLOE系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) | +| [PaddleDetection/PicoDet](./paddledetection) | PicoDet系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) | +| [PaddleDetection/YOLOX](./paddledetection) | Paddle版本的YOLOX系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) | +| [PaddleDetection/YOLOv3](./paddledetection) | YOLOv3系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) | +| [PaddleDetection/PPYOLO](./paddledetection) | PPYOLO系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) | +| [PaddleDetection/FasterRCNN](./paddledetection) | FasterRCNN系列模型 | Paddle | [Release/2.4](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4) | +| [WongKinYiu/YOLOv7](./yolov7) | YOLOv7、YOLOv7-X等系列模型 | ONNX | [Release/v0.1](https://github.com/WongKinYiu/yolov7/tree/v0.1) | +| [RangiLyu/NanoDetPlus](./nanodet_plus) | NanoDetPlus 系列模型 | ONNX | 
[Release/v1.0.0-alpha-1](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) | +| [ultralytics/YOLOv5](./yolov5) | YOLOv5 系列模型 | ONNX | [Release/v6.0](https://github.com/ultralytics/yolov5/tree/v6.0) | +| [ppogg/YOLOv5-Lite](./yolov5lite) | YOLOv5-Lite 系列模型 | ONNX | [Release/v1.4](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) | +| [meituan/YOLOv6](./yolov6) | YOLOv6 系列模型 | ONNX | [Release/0.1.0](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) | +| [WongKinYiu/YOLOR](./yolor) | YOLOR 系列模型 | ONNX | [Release/weights](https://github.com/WongKinYiu/yolor/releases/tag/weights) | +| [Megvii-BaseDetection/YOLOX](./yolox) | YOLOX 系列模型 | ONNX | [Release/v0.1.1](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) | +| [WongKinYiu/ScaledYOLOv4](./scaledyolov4) | ScaledYOLOv4 系列模型 | ONNX | [CommitID: 6768003](https://github.com/WongKinYiu/ScaledYOLOv4/commit/676800364a3446900b9e8407bc880ea2127b3415) | diff --git a/examples/vision/detection/nanodet_plus/README.md b/examples/vision/detection/nanodet_plus/README.md index a295e122fc..8ad107d9c5 100644 --- a/examples/vision/detection/nanodet_plus/README.md +++ b/examples/vision/detection/nanodet_plus/README.md @@ -1,11 +1,10 @@ # NanoDetPlus准备部署模型 -## 模型版本说明 - NanoDetPlus部署实现来自[NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) 的代码,基于coco的[预训练模型](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)。 - - (1)[预训练模型](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)的*.onnx可直接进行部署; - - (2)自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。 + - (1)[官方库](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1)提供的*.onnx可直接进行部署; + - (2)开发者自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。 ## 下载预训练ONNX模型 diff --git a/examples/vision/detection/scaledyolov4/README.md b/examples/vision/detection/scaledyolov4/README.md index 5a0ba000f1..b86ab7f79f 100644 --- a/examples/vision/detection/scaledyolov4/README.md +++ b/examples/vision/detection/scaledyolov4/README.md @@ -2,7 +2
@@ - ScaledYOLOv4部署实现来自[ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4)的代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/ScaledYOLOv4)。 - - (1)[预训练模型](https://github.com/WongKinYiu/ScaledYOLOv4)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;*.onnx、*.trt和*.pose模型不支持部署; + - (1)[官方库](https://github.com/WongKinYiu/ScaledYOLOv4)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - (2)自己数据训练的ScaledYOLOv4模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。 diff --git a/examples/vision/detection/yolor/README.md b/examples/vision/detection/yolor/README.md index 7889eac9f6..b7a4ff8513 100644 --- a/examples/vision/detection/yolor/README.md +++ b/examples/vision/detection/yolor/README.md @@ -2,7 +2,7 @@ - YOLOR部署实现来自[YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights)的代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/yolor/releases/tag/weights)。 - - (1)[预训练模型](https://github.com/WongKinYiu/yolor/releases/tag/weights)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;*.onnx、*.trt和*.pose模型不支持部署; + - (1)[官方库](https://github.com/WongKinYiu/yolor/releases/tag/weights)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - (2)自己数据训练的YOLOR模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。 diff --git a/examples/vision/detection/yolov5/README.md b/examples/vision/detection/yolov5/README.md index e83dcdd504..79a1b6fef3 100644 --- a/examples/vision/detection/yolov5/README.md +++ b/examples/vision/detection/yolov5/README.md @@ -1,9 +1,7 @@ # YOLOv5准备部署模型 -## 模型版本说明 - - YOLOv5 v6.0部署模型实现来自[YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.0),和[基于COCO的预训练模型](https://github.com/ultralytics/yolov5/releases/tag/v6.0) - - (1)[预训练模型](https://github.com/ultralytics/yolov5/releases/tag/v6.0)的*.onnx可直接进行部署; + - (1)[官方库](https://github.com/ultralytics/yolov5/releases/tag/v6.0)提供的*.onnx可直接进行部署; - (2)开发者基于自己数据训练的YOLOv5 v6.0模型,可使用[YOLOv5](https://github.com/ultralytics/yolov5)中的`export.py`导出ONNX文件后,完成部署。 diff --git
a/examples/vision/detection/yolov5lite/README.md b/examples/vision/detection/yolov5lite/README.md index 4b95967d1d..8eafee619b 100644 --- a/examples/vision/detection/yolov5lite/README.md +++ b/examples/vision/detection/yolov5lite/README.md @@ -3,7 +3,7 @@ - YOLOv5Lite部署实现来自[YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) 代码,和[基于COCO的预训练模型](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)。 - - (1)[预训练模型](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;*.onnx、*.trt和*.pose模型不支持部署; + - (1)[官方库](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - (2)自己数据训练的YOLOv5Lite模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。 diff --git a/examples/vision/detection/yolov6/README.md b/examples/vision/detection/yolov6/README.md index 497778fe0f..fcef8d588a 100644 --- a/examples/vision/detection/yolov6/README.md +++ b/examples/vision/detection/yolov6/README.md @@ -1,11 +1,10 @@ # YOLOv6准备部署模型 -## 模型版本说明 - YOLOv6 部署实现来自[YOLOv6](https://github.com/meituan/YOLOv6/releases/tag/0.1.0),和[基于coco的预训练模型](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)。 - - (1)[基于coco的预训练模型](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)的*.onnx可直接进行部署; - - (2)自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。 + - (1)[官方库](https://github.com/meituan/YOLOv6/releases/tag/0.1.0)提供的*.onnx可直接进行部署; + - (2)开发者自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。 diff --git a/examples/vision/detection/yolov7/README.md b/examples/vision/detection/yolov7/README.md index 266aeace2d..e911defeed 100644 --- a/examples/vision/detection/yolov7/README.md +++ b/examples/vision/detection/yolov7/README.md @@ -2,8 +2,10 @@ - YOLOv7部署实现来自[YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1)分支代码,和[基于COCO的预训练模型](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1)。 - - 
(1)[预训练模型](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;*.onnx、*.trt和*.pose模型不支持部署; - - (2)自己数据训练的YOLOv7 0.1模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。 + - (1)[官方库](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1)提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署;*.trt和*.pose模型不支持部署; + - (2)自己数据训练的YOLOv7模型,按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)操作后,参考[详细部署文档](#详细部署文档)完成部署。 + + ## 导出ONNX模型 diff --git a/examples/vision/detection/yolox/README.md b/examples/vision/detection/yolox/README.md index 193089904b..fee7ac9541 100644 --- a/examples/vision/detection/yolox/README.md +++ b/examples/vision/detection/yolox/README.md @@ -1,11 +1,11 @@ # YOLOX准备部署模型 -## 模型版本说明 - YOLOX部署实现来自[YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0),基于[coco的预训练模型](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0)。 - - (1)[预训练模型](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0)中的*.pth通过导出ONNX模型操作后,可进行部署;*.onnx、*.trt和*.pose模型不支持部署; - - (2)开发者基于自己数据训练的YOLOX v0.1.1模型,可按照导出ONNX模型后,完成部署。 + - (1)[官方库](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0)提供的*.pth通过导出ONNX模型操作后,可进行部署; + - (2)开发者自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。 + ## 下载预训练ONNX模型 diff --git a/examples/vision/facedet/README.md b/examples/vision/facedet/README.md index 9e2fc50145..cde8c71b80 100644 --- a/examples/vision/facedet/README.md +++ b/examples/vision/facedet/README.md @@ -4,7 +4,7 @@ FastDeploy目前支持如下人脸检测模型部署 | 模型 | 说明 | 模型格式 | 版本 | | :--- | :--- | :------- | :--- | -| [retinaface](./retinaface) | RetinaFace系列模型 | ONNX | CommitID:b984b4b | -| [ultraface](./ultraface) | UltraFace系列模型 | ONNX |CommitID:dffdddd | -| [yolov5face](./yolov5face) | YOLOv5Face系列模型 | ONNX | CommitID:4fd1ead | -| [scrfd](./scrfd) | SCRFD系列模型 | ONNX | CommitID:17cdeab | +| [biubug6/RetinaFace](./retinaface) | RetinaFace 系列模型 | ONNX | 
[CommitID:b984b4b](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b) | +| [Linzaer/UltraFace](./ultraface) | UltraFace 系列模型 | ONNX |[CommitID:dffdddd](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd) | +| [deepcam-cn/YOLOv5Face](./yolov5face) | YOLOv5Face 系列模型 | ONNX | [CommitID:4fd1ead](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead) | +| [deepinsight/SCRFD](./scrfd) | SCRFD 系列模型 | ONNX | [CommitID:17cdeab](https://github.com/deepinsight/insightface/tree/17cdeab12a35efcebc2660453a8cbeae96e20950) | diff --git a/examples/vision/facedet/retinaface/README.md b/examples/vision/facedet/retinaface/README.md index 525b6cb2f2..6aeb113ff0 100644 --- a/examples/vision/facedet/retinaface/README.md +++ b/examples/vision/facedet/retinaface/README.md @@ -1,10 +1,9 @@ # RetinaFace准备部署模型 -## 模型版本说明 - - [RetinaFace](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b) - - (1)[链接中](https://github.com/biubug6/Pytorch_Retinaface/commit/b984b4b)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - - (2)自己数据训练的RetinaFace CommitID:b984b4b模型,可按照[导出ONNX模型](#导出ONNX模型)后,完成部署。 + - (1)[官方库](https://github.com/biubug6/Pytorch_Retinaface/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; + - (2)自己数据训练的RetinaFace模型,可按照[导出ONNX模型](#导出ONNX模型)后,完成部署。 + ## 导出ONNX模型 diff --git a/examples/vision/facedet/scrfd/README.md b/examples/vision/facedet/scrfd/README.md index d1694d2c1c..8434a3942e 100644 --- a/examples/vision/facedet/scrfd/README.md +++ b/examples/vision/facedet/scrfd/README.md @@ -1,10 +1,10 @@ # SCRFD准备部署模型 -## 模型版本说明 - [SCRFD](https://github.com/deepinsight/insightface/tree/17cdeab12a35efcebc2660453a8cbeae96e20950) - - (1)[链接中](https://github.com/deepinsight/insightface/tree/17cdeab12a35efcebc2660453a8cbeae96e20950)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - - (2)开发者基于自己数据训练的SCRFD CID:17cdeab模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 + - 
(1)[官方库](https://github.com/deepinsight/insightface/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; + - (2)开发者基于自己数据训练的SCRFD模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 + ## 导出ONNX模型 diff --git a/examples/vision/facedet/ultraface/README.md b/examples/vision/facedet/ultraface/README.md index 678fb771f7..cd88f0ceff 100644 --- a/examples/vision/facedet/ultraface/README.md +++ b/examples/vision/facedet/ultraface/README.md @@ -1,9 +1,10 @@ # UltraFace准备部署模型 -## 模型版本说明 - [UltraFace](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd) - - (1)[链接中](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/commit/dffdddd)的*.onnx可下载, 也可以通过下面模型链接下载并进行部署 + - (1)[官方库](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/)中提供的*.onnx可下载, 也可以通过下面模型链接下载并进行部署 + - (2)开发者自己训练的模型,导出ONNX模型后,参考[详细部署文档](#详细部署文档)完成部署。 + ## 下载预训练ONNX模型 diff --git a/examples/vision/facedet/yolov5face/README.md b/examples/vision/facedet/yolov5face/README.md index 424a76bbed..d9dc9f949d 100644 --- a/examples/vision/facedet/yolov5face/README.md +++ b/examples/vision/facedet/yolov5face/README.md @@ -1,9 +1,7 @@ # YOLOv5Face准备部署模型 -## 模型版本说明 - - [YOLOv5Face](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead) - - (1)[链接中](https://github.com/deepcam-cn/yolov5-face/commit/4fd1ead)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; + - (1)[官方库](https://github.com/deepcam-cn/yolov5-face/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - (2)开发者基于自己数据训练的YOLOv5Face模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 ## 导出ONNX模型 diff --git a/examples/vision/faceid/README.md b/examples/vision/faceid/README.md index 4950463a1a..506c65792b 100644 --- a/examples/vision/faceid/README.md +++ b/examples/vision/faceid/README.md @@ -1,10 +1,11 @@ -人脸检测模型 +# 人脸识别模型 + FastDeploy目前支持如下人脸识别模型部署 | 模型 | 说明 | 模型格式 | 版本 | | :--- | :--- | :------- | :--- | -| [arcface](./insightface) | ArcFace系列模型 | ONNX | CommitID:babb9a5 | -| [cosface](./insightface) | 
CosFace系列模型 | ONNX | CommitID:babb9a5 | -| [partial_fc](./insightface) | PartialFC系列模型 | ONNX | CommitID:babb9a5 | -| [vpl](./insightface) | VPL系列模型 | ONNX | CommitID:babb9a5 | +| [deepinsight/ArcFace](./insightface) | ArcFace 系列模型 | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) | +| [deepinsight/CosFace](./insightface) | CosFace 系列模型 | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) | +| [deepinsight/PartialFC](./insightface) | PartialFC 系列模型 | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) | +| [deepinsight/VPL](./insightface) | VPL 系列模型 | ONNX | [CommitID:babb9a5](https://github.com/deepinsight/insightface/commit/babb9a5) | diff --git a/examples/vision/faceid/insightface/README.md b/examples/vision/faceid/insightface/README.md index cf3371247a..981c898e57 100644 --- a/examples/vision/faceid/insightface/README.md +++ b/examples/vision/faceid/insightface/README.md @@ -1,9 +1,7 @@ # InsightFace准备部署模型 -## 模型版本说明 - - [InsightFace](https://github.com/deepinsight/insightface/commit/babb9a5) - - (1)[链接中](https://github.com/deepinsight/insightface/commit/babb9a5)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; + - (1)[官方库](https://github.com/deepinsight/insightface/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - (2)开发者基于自己数据训练的InsightFace模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 diff --git a/examples/vision/faceid/insightface/python/infer_arcface.py b/examples/vision/faceid/insightface/python/infer_arcface.py index 2d725026e1..a9846b4cc8 100644 --- a/examples/vision/faceid/insightface/python/infer_arcface.py +++ b/examples/vision/faceid/insightface/python/infer_arcface.py @@ -18,7 +18,7 @@ def parse_arguments(): import ast parser = argparse.ArgumentParser() parser.add_argument( - "--model", required=True, help="Path of scrfd onnx model.") + "--model", required=True, help="Path of insightface onnx model.") parser.add_argument( - "--face", 
required=True, help="Path of test face image file.") parser.add_argument( diff --git a/examples/vision/faceid/insightface/python/infer_cosface.py b/examples/vision/faceid/insightface/python/infer_cosface.py index 07f1a0b14b..7b45f7a402 100644 --- a/examples/vision/faceid/insightface/python/infer_cosface.py +++ b/examples/vision/faceid/insightface/python/infer_cosface.py @@ -18,7 +18,7 @@ def parse_arguments(): import ast parser = argparse.ArgumentParser() parser.add_argument( - "--model", required=True, help="Path of scrfd onnx model.") + "--model", required=True, help="Path of insightface onnx model.") parser.add_argument( "--face", required=True, help="Path of test face image file.") parser.add_argument( diff --git a/examples/vision/faceid/insightface/python/infer_partial_fc.py b/examples/vision/faceid/insightface/python/infer_partial_fc.py index b931af0dff..b1b2f3bf1d 100644 --- a/examples/vision/faceid/insightface/python/infer_partial_fc.py +++ b/examples/vision/faceid/insightface/python/infer_partial_fc.py @@ -18,7 +18,7 @@ def parse_arguments(): import ast parser = argparse.ArgumentParser() parser.add_argument( - "--model", required=True, help="Path of scrfd onnx model.") + "--model", required=True, help="Path of insightface onnx model.") parser.add_argument( "--face", required=True, help="Path of test face image file.") parser.add_argument( diff --git a/examples/vision/faceid/insightface/python/infer_vpl.py b/examples/vision/faceid/insightface/python/infer_vpl.py index 14c25bfb47..7618913f7d 100644 --- a/examples/vision/faceid/insightface/python/infer_vpl.py +++ b/examples/vision/faceid/insightface/python/infer_vpl.py @@ -18,7 +18,7 @@ def parse_arguments(): import ast parser = argparse.ArgumentParser() parser.add_argument( - "--model", required=True, help="Path of scrfd onnx model.") + "--model", required=True, help="Path of insightface onnx model.") parser.add_argument( "--face", required=True, help="Path of test face image file.") parser.add_argument( 
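上述四个 `infer_*.py` 脚本共用同一段参数解析样板代码,本次补丁只修正了 `--model` 的 help 文案。下面是该模式的一个最小可运行示意(假设性示例,独立于补丁本身;传入 `argv` 参数是为了便于测试,原脚本中并无此参数):

```python
import argparse


def parse_arguments(argv=None):
    # Same --model/--face arguments as the insightface demo scripts;
    # accepting an explicit argv list keeps the function testable.
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model", required=True, help="Path of insightface onnx model.")
    parser.add_argument(
        "--face", required=True, help="Path of test face image file.")
    # argv=None falls back to sys.argv[1:], matching the original scripts.
    return parser.parse_args(argv)


if __name__ == "__main__":
    args = parse_arguments()
    print(args.model, args.face)
```

两个参数均为 `required=True`,缺省时 `argparse` 会直接报错退出,这与各 demo 脚本的行为一致。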
diff --git a/examples/vision/matting/README.md b/examples/vision/matting/README.md index 1fba41f3e8..1076d14b45 100644 --- a/examples/vision/matting/README.md +++ b/examples/vision/matting/README.md @@ -1,7 +1,7 @@ -人脸检测模型 +# 抠图模型 -FastDeploy目前支持如下人脸识别模型部署 +FastDeploy目前支持如下抠图模型部署 | 模型 | 说明 | 模型格式 | 版本 | | :--- | :--- | :------- | :--- | -| [modnet](./modnet) | MODNet系列模型 | ONNX | CommitID:28165a4 | +| [ZHKKKe/MODNet](./modnet) | MODNet 系列模型 | ONNX | [CommitID:28165a4](https://github.com/ZHKKKe/MODNet/commit/28165a4) | diff --git a/examples/vision/matting/modnet/README.md b/examples/vision/matting/modnet/README.md index dbeb901fed..31c0718c8c 100644 --- a/examples/vision/matting/modnet/README.md +++ b/examples/vision/matting/modnet/README.md @@ -1,10 +1,8 @@ # MODNet准备部署模型 -## 模型版本说明 - - [MODNet](https://github.com/ZHKKKe/MODNet/commit/28165a4) - - (1)[链接中](https://github.com/ZHKKKe/MODNet/commit/28165a4)的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; - - (2)开发者基于自己数据训练的MODNet CommitID:b984b4b模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 + - (1)[官方库](https://github.com/ZHKKKe/MODNet/)中提供的*.pt通过[导出ONNX模型](#导出ONNX模型)操作后,可进行部署; + - (2)开发者基于自己数据训练的MODNet模型,可按照[导出ONNX模型](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B)后,完成部署。 ## 导出ONNX模型