[Doc] Add Python comments for external models (#408)
* first commit for yolov7

* pybind for yolov7

* CPP README.md

* CPP README.md

* modified yolov7.cc

* README.md

* python file modify

* delete license in fastdeploy/

* repush the conflict part

* README.md modified

* README.md modified

* file path modified

* file path modified

* file path modified

* file path modified

* file path modified

* README modified

* README modified

* move some helpers to private

* add examples for yolov7

* api.md modified

* api.md modified

* api.md modified

* YOLOv7

* yolov7 release link

* yolov7 release link

* yolov7 release link

* copyright

* change some helpers to private

* change variables to const and fix documents.

* gitignore

* Transfer some functions to private member of class

* Transfer some functions to private member of class

* Merge from develop (#9)

* Fix compile problem in different python version (#26)

* fix some usage problem in linux

* Fix compile problem

Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>

* Add PaddleDetection/PPYOLOE model support (#22)

* add ppdet/ppyoloe

* Add demo code and documents

* add convert processor to vision (#27)

* update .gitignore

* Added checking for cmake include dir

* fixed missing trt_backend option bug when init from trt

* remove unneeded data layout and add pre-check for dtype

* changed RGB2BRG to BGR2RGB in ppcls model

* add model_zoo yolov6 c++/python demo

* fixed CMakeLists.txt typos

* update yolov6 cpp/README.md

* add yolox c++/pybind and model_zoo demo

* move some helpers to private

* fixed CMakeLists.txt typos

* add normalize with alpha and beta

* add version notes for yolov5/yolov6/yolox

* add copyright to yolov5.cc

* revert normalize

* fixed some bugs in yolox

* fixed examples/CMakeLists.txt to avoid conflicts

* add convert processor to vision

* format examples/CMakeLists summary

* Fix bug while the inference result is empty with YOLOv5 (#29)

* Add multi-label function for yolov5

* Update README.md

Update doc

* Update fastdeploy_runtime.cc

fix variable option.trt_max_shape wrong name

* Update runtime_option.md

Update resnet model dynamic shape setting name from images to x

* Fix bug when inference result boxes are empty

* Delete detection.py

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>

* first commit for yolor

* for merge

* Develop (#11)

* Yolor (#16)

* Develop (#11) (#12)

* Develop (#13)

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* documents

* Develop (#14)

* add is_dynamic for YOLO series (#22)

* modify ppmatting backend and docs

* modify ppmatting docs

* fix the PPMatting size problem

* fix LimitShort's log

* retrigger ci

* modify PPMatting docs

* modify the way of dealing with LimitShort

* add python comments for external models

* modify resnet c++ comments

* modify C++ comments for external models

* modify python comments and add result class comments

* fix comments compile error

* modify result.h comments

* modify yolor comments

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
6 people authored Oct 25, 2022
1 parent 718dc32 commit 1f39b4f
Showing 46 changed files with 1,038 additions and 238 deletions.
34 changes: 34 additions & 0 deletions docs/api_docs/python/face_detection.md
@@ -0,0 +1,34 @@
# Face Detection API

## fastdeploy.vision.facedet.RetinaFace

```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.RetinaFace
:members:
:inherited-members:
```


## fastdeploy.vision.facedet.SCRFD

```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.SCRFD
:members:
:inherited-members:
```

## fastdeploy.vision.facedet.UltraFace

```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.UltraFace
:members:
:inherited-members:
```

## fastdeploy.vision.facedet.YOLOv5Face

```{eval-rst}
.. autoclass:: fastdeploy.vision.facedet.YOLOv5Face
:members:
:inherited-members:
```
41 changes: 41 additions & 0 deletions docs/api_docs/python/face_recognition.md
@@ -0,0 +1,41 @@
# Face Recognition API

## fastdeploy.vision.faceid.AdaFace

```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.AdaFace
:members:
:inherited-members:
```

## fastdeploy.vision.faceid.CosFace

```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.CosFace
:members:
:inherited-members:
```

## fastdeploy.vision.faceid.ArcFace

```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.ArcFace
:members:
:inherited-members:
```

## fastdeploy.vision.faceid.PartialFC

```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.PartialFC
:members:
:inherited-members:
```

## fastdeploy.vision.faceid.VPL

```{eval-rst}
.. autoclass:: fastdeploy.vision.faceid.VPL
:members:
:inherited-members:
```
2 changes: 2 additions & 0 deletions docs/api_docs/python/index.rst
@@ -18,4 +18,6 @@ FastDeploy
image_classification.md
keypoint_detection.md
matting.md
face_recognition.md
face_detection.md
vision_results_en.md
16 changes: 15 additions & 1 deletion docs/api_docs/python/matting.md
@@ -1,3 +1,17 @@
# Matting API

comming soon...
## fastdeploy.vision.matting.MODNet

```{eval-rst}
.. autoclass:: fastdeploy.vision.matting.MODNet
:members:
:inherited-members:
```

## fastdeploy.vision.matting.PPMatting

```{eval-rst}
.. autoclass:: fastdeploy.vision.matting.PPMatting
:members:
:inherited-members:
```
90 changes: 90 additions & 0 deletions docs/api_docs/python/object_detection.md
@@ -63,3 +63,93 @@
:members:
:inherited-members:
```

## fastdeploy.vision.detection.NanoDetPlus

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.NanoDetPlus
:members:
:inherited-members:
```

## fastdeploy.vision.detection.ScaledYOLOv4

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.ScaledYOLOv4
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOR

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOR
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOv5

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv5
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOv5Lite

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv5Lite
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOv6

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv6
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOv7

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv7
:members:
:inherited-members:
```



## fastdeploy.vision.detection.YOLOv7End2EndORT

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv7End2EndORT
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOv7End2EndTRT

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOv7End2EndTRT
:members:
:inherited-members:
```

## fastdeploy.vision.detection.YOLOX

```{eval-rst}
.. autoclass:: fastdeploy.vision.detection.YOLOX
:members:
:inherited-members:
```
13 changes: 7 additions & 6 deletions fastdeploy/vision/classification/contrib/resnet.h
@@ -25,7 +25,7 @@ namespace vision {
*
*/
namespace classification {
/*! @brief ResNet series model
/*! @brief Torchvision ResNet series model
*/
class FASTDEPLOY_DECL ResNet : public FastDeployModel {
public:
@@ -44,17 +44,18 @@ class FASTDEPLOY_DECL ResNet : public FastDeployModel {
virtual std::string ModelName() const { return "ResNet"; }
/** \brief Predict for the input "im", the result will be saved in "result".
*
* \param[in] im Input image for inference.
* \param[in] im The input image data, which comes from cv::imread() and is a 3-D array with layout HWC in BGR format
* \param[in] result The output classification result will be saved here.
* \param[in] topk The length of the returned values, e.g., if topk == 2, the result will include the 2 most probable class labels for the input image.
*/
virtual bool Predict(cv::Mat* im, ClassifyResult* result, int topk = 1);

/// Tuple of (width, height)
/*! @brief
Argument for the image preprocessing step: a tuple of (width, height) that decides the target size after resizing
*/
std::vector<int> size;
/// Mean parameters for normalize
/// Mean parameters for normalization; the size should be the same as the number of channels
std::vector<float> mean_vals;
/// Std parameters for normalize
/// Std parameters for normalization; the size should be the same as the number of channels
std::vector<float> std_vals;


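The preprocessing fields documented above (`size`, `mean_vals`, `std_vals`) can be sketched in Python. This is an illustrative helper under assumed ImageNet-style values, not FastDeploy's actual implementation:

```python
# Sketch of how per-channel normalization with mean_vals/std_vals is typically
# applied to one HWC pixel; the lengths must match the number of channels.

def normalize_pixel(bgr, mean_vals, std_vals):
    """Return (value - mean) / std for each channel of a single pixel."""
    assert len(bgr) == len(mean_vals) == len(std_vals)
    return [(v - m) / s for v, m, s in zip(bgr, mean_vals, std_vals)]

# Illustrative ImageNet-style parameters (assumed values, 3 channels).
mean_vals = [0.485, 0.456, 0.406]
std_vals = [0.229, 0.224, 0.225]
pixel = [0.5, 0.5, 0.5]
print(normalize_pixel(pixel, mean_vals, std_vals))
```

A real pipeline would first resize the image to `size` (width, height) before normalizing every pixel this way.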
69 changes: 49 additions & 20 deletions fastdeploy/vision/common/result.h
@@ -154,88 +154,117 @@ struct FASTDEPLOY_DECL OCRResult : public BaseResult {
std::string Str();
};

/*! @brief Face detection result structure for all the face detection models
*/
struct FASTDEPLOY_DECL FaceDetectionResult : public BaseResult {
// box: xmin, ymin, xmax, ymax
/** \brief All the detected object boxes for an input image; the size of `boxes` is the number of detected objects, and each element of `boxes` is an array of 4 float values representing [xmin, ymin, xmax, ymax]
*/
std::vector<std::array<float, 4>> boxes;
// landmark: x, y, landmarks may empty if the
// model don't detect face with landmarks.
// Note, one face might have multiple landmarks,
// such as 5/19/21/68/98/..., etc.
/** \brief
* If the model detects faces with landmarks, every detected object box corresponds to a set of landmarks; each landmark is an array of 2 float values representing the location [x, y]
*/
std::vector<std::array<float, 2>> landmarks;
/** \brief
* The confidence score of each object detected from a single image; the number of elements is consistent with boxes.size()
*/
std::vector<float> scores;
ResultType type = ResultType::FACE_DETECTION;
// set landmarks_per_face manually in your post processes.
/** \brief
* `landmarks_per_face` indicates the number of face landmarks for each detected face
* if the model's output contains face landmarks (such as YOLOv5Face, SCRFD, ...)
*/
int landmarks_per_face;

FaceDetectionResult() { landmarks_per_face = 0; }
FaceDetectionResult(const FaceDetectionResult& res);

/// Clear detection result
void Clear();

void Reserve(int size);

void Resize(int size);

/// Debug function, convert the result to string to print
std::string Str();
};
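The invariants documented for `FaceDetectionResult` can be sketched as a Python check (a hypothetical helper, not part of the FastDeploy bindings): one score per box, four floats per box, and `len(landmarks) == len(boxes) * landmarks_per_face` when the model outputs landmarks.

```python
# Illustrative consistency check mirroring the FaceDetectionResult fields.

def check_face_detection_result(boxes, scores, landmarks, landmarks_per_face):
    assert len(boxes) == len(scores)          # one confidence per detected face
    for box in boxes:
        assert len(box) == 4                  # [xmin, ymin, xmax, ymax]
    if landmarks_per_face > 0:                # e.g. 5 for YOLOv5Face / SCRFD
        assert len(landmarks) == len(boxes) * landmarks_per_face
    return True

boxes = [[10.0, 20.0, 110.0, 140.0]]
scores = [0.98]
landmarks = [[30.0, 60.0], [80.0, 60.0], [55.0, 90.0],
             [35.0, 115.0], [75.0, 115.0]]   # 5 landmarks for the one face
print(check_face_detection_result(boxes, scores, landmarks, landmarks_per_face=5))
```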

/*! @brief Segmentation result structure for all the segmentation models
*/
struct FASTDEPLOY_DECL SegmentationResult : public BaseResult {
// mask
/** \brief
* `label_map` stores the pixel-level category label for the input image; the number of pixels is equal to label_map.size()
*/
std::vector<uint8_t> label_map;
/** \brief
* `score_map` stores the probability of the predicted label for each pixel of the input image.
*/
std::vector<float> score_map;
/// The output shape, means [H, W]
std::vector<int64_t> shape;
bool contain_score_map = false;

ResultType type = ResultType::SEGMENTATION;

/// Clear detection result
void Clear();

void Reserve(int size);

void Resize(int size);

/// Debug function, convert the result to string to print
std::string Str();
};
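Since `label_map` is flattened while `shape` is `[H, W]`, the label of pixel `(y, x)` lives at index `y * W + x`. A small sketch (hypothetical helper, not a FastDeploy API):

```python
# Map a (y, x) pixel coordinate into the flattened label_map of a
# SegmentationResult whose shape is [H, W].

def pixel_label(label_map, shape, y, x):
    h, w = shape
    assert len(label_map) == h * w  # one label per pixel
    return label_map[y * w + x]

shape = [2, 3]                      # H=2, W=3
label_map = [0, 0, 1,
             1, 1, 2]
print(pixel_label(label_map, shape, 1, 2))  # bottom-right pixel
```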

/*! @brief Face recognition result structure for all the Face recognition models
*/
struct FASTDEPLOY_DECL FaceRecognitionResult : public BaseResult {
// face embedding vector with 128/256/512 ... dim
/** \brief The feature embedding produced by the face recognition model; it can be used to compute the feature similarity between faces.
*/
std::vector<float> embedding;

ResultType type = ResultType::FACE_RECOGNITION;

FaceRecognitionResult() {}
FaceRecognitionResult(const FaceRecognitionResult& res);

/// Clear detection result
void Clear();

void Reserve(int size);

void Resize(int size);

/// Debug function, convert the result to string to print
std::string Str();
};
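The `embedding` vector is typically compared between two faces with cosine similarity; this is a generic sketch of that common usage, not a function FastDeploy provides:

```python
import math

# Cosine similarity between two face embeddings of equal dimension
# (e.g. the 128/256/512-dim vectors a FaceRecognitionResult holds).

def cosine_similarity(a, b):
    assert len(a) == len(b)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

e1 = [1.0, 0.0, 0.0]
e2 = [1.0, 1.0, 0.0]
print(round(cosine_similarity(e1, e2), 4))
```

A higher value means the two embeddings, and hence the two faces, are more similar.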

/*! @brief Matting result structure for all the Matting models
*/
struct FASTDEPLOY_DECL MattingResult : public BaseResult {
// alpha matte and fgr (predicted foreground: HWC/BGR float32)
/** \brief
`alpha` is a one-dimensional vector holding the predicted alpha transparency values; the values are in the range [0., 1.], and the length is h x w, where h and w are the height and width of the input image
*/
std::vector<float> alpha; // h x w
/** \brief
If the model can predict the foreground, `foreground` saves the predicted foreground image; the shape is generally [height, width, channel].
*/
std::vector<float> foreground; // h x w x c (c=3 default)
// height, width, channel for foreground and alpha
// must be (h,w,c) and setup before Reserve and Resize
// c is only for foreground if contain_foreground is true.
/** \brief
* The shape of the output result: when contain_foreground == false, shape only contains (h, w); when contain_foreground == true, shape contains (h, w, c), where c is generally 3
*/
std::vector<int64_t> shape;
/** \brief
contain_foreground is true if the model can predict both the alpha matte and the foreground; default false
*/
bool contain_foreground = false;

ResultType type = ResultType::MATTING;

MattingResult() {}
MattingResult(const MattingResult& res);

/// Clear detection result
void Clear();

void Reserve(int size);

void Resize(int size);

/// Debug function, convert the result to string to print
std::string Str();
};
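The size relationships documented for `MattingResult` can be sketched as follows (a hypothetical helper, assuming the layout described in the comments above): `alpha` has `h * w` entries, and `foreground` has `h * w * c` entries only when `contain_foreground` is true.

```python
# Expected lengths of MattingResult.alpha and MattingResult.foreground,
# derived from `shape` and `contain_foreground` as documented.

def expected_lengths(shape, contain_foreground):
    if contain_foreground:
        h, w, c = shape          # shape is (h, w, c), c is generally 3
        return h * w, h * w * c
    h, w = shape                 # shape is only (h, w)
    return h * w, 0              # no foreground predicted

print(expected_lengths((2, 2, 3), True))
print(expected_lengths((2, 2), False))
```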
