The PyTorch implementation is DBNet.
- generate .wts

  Download the code and model from DBNet and configure your environment.

  In tools/predict.py, set `save_wts` to `True` and run it; the .wts file will be generated. An onnx model can also be exported by setting `onnx` to `True`. (A sketch of a typical .wts parser is given below, after the repo link.)
- cmake and make
  ```
  mkdir build
  cd build
  cmake ..
  make
  sudo ./dbnet -s             // serialize model to plan file, i.e. 'DBNet.engine'
  sudo ./dbnet -d ../samples  // deserialize plan file and run inference; the images in samples will be processed
  ```

  (A sketch of the serialize/deserialize calls behind `-s`/`-d` is given below, after the repo link.)
https://github.com/BaofengZan/DBNet-TensorRT
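For reference, .wts files in tensorrtx-style projects are usually a plain-text dump: a tensor count on the first line, then one `name size hex hex ...` record per tensor. The parser below is a sketch of that common pattern, not code taken from this repo; compare it with the actual weight-loading code in the sources before relying on it.

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <map>
#include <string>
#include "NvInfer.h"

using namespace nvinfer1;

// Sketch of a tensorrtx-style .wts parser (file layout assumed, see note above).
std::map<std::string, Weights> loadWeights(const std::string& file) {
    std::map<std::string, Weights> weightMap;
    std::ifstream input(file);
    assert(input.is_open() && "Unable to load weight file.");

    int32_t count;
    input >> count;  // number of weight tensors in the file
    assert(count > 0 && "Invalid weight map file.");

    while (count--) {
        Weights wt{DataType::kFLOAT, nullptr, 0};
        std::string name;
        uint32_t size;
        input >> name >> std::dec >> size;  // tensor name and element count

        uint32_t* val = new uint32_t[size];  // raw fp32 bits stored as hex text
        for (uint32_t i = 0; i < size; ++i) {
            input >> std::hex >> val[i];
        }
        wt.values = val;  // freed by the caller once the engine is built
        wt.count = size;
        weightMap[name] = wt;
    }
    return weightMap;
}
```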
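The `-s`/`-d` flags correspond to the standard TensorRT serialize/deserialize flow. The snippet below is a generic sketch of that flow, not this repo's main(): `gLogger` is a placeholder for an ILogger implementation defined elsewhere, and `destroy()` reflects the pre-TensorRT-8 API that tensorrtx-style repos typically target.

```cpp
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include "NvInfer.h"

using namespace nvinfer1;

// -s: dump an already-built engine to 'DBNet.engine'.
void saveEngine(ICudaEngine* engine) {
    IHostMemory* modelStream = engine->serialize();
    std::ofstream p("DBNet.engine", std::ios::binary);
    p.write(static_cast<const char*>(modelStream->data()), modelStream->size());
    modelStream->destroy();
}

// -d: read the plan file back and deserialize it for inference.
ICudaEngine* loadEngine(const std::string& path, ILogger& gLogger) {
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    IRuntime* runtime = createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size());
}
```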
- In common.hpp, the following two functions can be merged (a sketch of a merged version is given after this list):

  ```
  ILayer* convBnLeaky(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)
  ILayer* convBnLeaky2(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)
  ```
- The postprocess method here should be optimized; it is slightly different from the PyTorch side (outlined after this list).
- The input image here is resized to 640x640 directly, while the PyTorch side uses the `letterbox` method (sketched after this list).
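On the first item: the two signatures are identical, so if the bodies differ only in a detail such as the weight-key layout, a single function with an extra switch would cover both. The sketch below assumes that is the only difference (verify against the actual bodies in common.hpp); the key names behind `altNaming` are hypothetical, and `addBatchNorm2d` is assumed to be the helper commonly found in tensorrtx-style common.hpp files.

```cpp
// Intended to live in common.hpp, where NvInfer.h, <map>, <string> and
// <cassert> are already included and addBatchNorm2d is already defined.
ILayer* convBnLeaky(INetworkDefinition* network, std::map<std::string, Weights>& weightMap,
                    ITensor& input, int outch, int ksize, int s, int g, std::string lname,
                    bool bias = true, bool altNaming = false) {
    Weights emptywts{DataType::kFLOAT, nullptr, 0};
    // Hypothetical key layouts; replace with the ones the two originals actually use.
    std::string convW = altNaming ? lname + ".weight" : lname + ".conv.weight";
    std::string convB = altNaming ? lname + ".bias"   : lname + ".conv.bias";
    std::string bnPfx = altNaming ? lname + ".bn"     : lname + ".conv.bn";

    IConvolutionLayer* conv = network->addConvolutionNd(
        input, outch, DimsHW{ksize, ksize}, weightMap[convW],
        bias ? weightMap[convB] : emptywts);
    assert(conv);
    conv->setStrideNd(DimsHW{s, s});
    conv->setPaddingNd(DimsHW{ksize / 2, ksize / 2});
    conv->setNbGroups(g);

    IScaleLayer* bn = addBatchNorm2d(network, weightMap, *conv->getOutput(0), bnPfx, 1e-5);
    IActivationLayer* leaky = network->addActivation(*bn->getOutput(0), ActivationType::kLEAKY_RELU);
    leaky->setAlpha(0.1f);
    return leaky;
}
```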
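On the postprocess item: DB-style decoding generally thresholds the probability map, extracts contours, and unclips the resulting polygons. The sketch below shows only the first two stages with OpenCV and omits the pyclipper-based unclip step that the PyTorch side performs, so it is an outline of the idea rather than a drop-in match; the default thresholds are assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Outline of DB-style decoding: binarize the probability map, take contours,
// keep boxes above a minimum area. Unclipping is omitted (see note above).
std::vector<cv::RotatedRect> decodeProbMap(const cv::Mat& probMap,  // CV_32F, values in [0,1]
                                           float binThresh = 0.3f,
                                           float minArea = 10.0f) {
    cv::Mat bin;
    cv::threshold(probMap, bin, binThresh, 255.0, cv::THRESH_BINARY);
    bin.convertTo(bin, CV_8UC1);  // findContours needs an 8-bit single-channel image

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::RotatedRect> boxes;
    for (const auto& c : contours) {
        cv::RotatedRect box = cv::minAreaRect(c);
        if (box.size.area() >= minArea) boxes.push_back(box);
    }
    return boxes;
}
```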
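On the resize item: a letterbox resize keeps the aspect ratio and pads the remainder instead of stretching the image to 640x640, so text regions are not distorted. A minimal OpenCV sketch follows; the centered padding and the 114 fill value are assumptions and should be matched to whatever transform the PyTorch side actually applies.

```cpp
#include <algorithm>
#include <cmath>
#include <opencv2/opencv.hpp>

// Letterbox resize: scale by the limiting ratio, pad the rest with a constant.
cv::Mat letterbox(const cv::Mat& img, int dstW = 640, int dstH = 640) {
    float r = std::min(dstW / (float)img.cols, dstH / (float)img.rows);
    int newW = (int)std::round(img.cols * r);
    int newH = (int)std::round(img.rows * r);

    cv::Mat resized;
    cv::resize(img, resized, cv::Size(newW, newH));

    // Distribute the padding evenly around the resized image.
    int top = (dstH - newH) / 2, bottom = dstH - newH - top;
    int left = (dstW - newW) / 2, right = dstW - newW - left;
    cv::Mat out;
    cv::copyMakeBorder(resized, out, top, bottom, left, right,
                       cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));
    return out;
}
```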