Converting e2e_faster_rcnn_X_101_32x8d_FPN_1x_caffe2.pkl to protobuf #24
Comments
Hi, could you expand on what you are trying to do? Thanks!
I'm trying to run this model as part of a C++ project and was attempting to convert the .pkl file to a PyTorch model file; however, being new to PyTorch, I wasn't sure how to proceed. So I tried to use Detectron's script for converting the model to protobuf. Do you have any pointers for that?
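As a first step before any conversion, it can help to see what the .pkl checkpoint actually contains. The sketch below is a minimal, hypothetical helper (`inspect_checkpoint` is not part of either repo); it assumes the checkpoint is a plain pickled dict, which is how Detectron-style .pkl files are typically stored.

```python
import pickle


def inspect_checkpoint(path):
    """Load a pickled checkpoint and return its sorted top-level keys.

    Hypothetical helper for exploration only. Assumes the .pkl file is a
    plain pickled dict (Detectron checkpoints commonly keep weights under
    a 'blobs' key). The latin1 encoding handles Python 2-era pickles.
    """
    with open(path, "rb") as f:
        data = pickle.load(f, encoding="latin1")
    return sorted(data.keys())
```

Listing the keys this way tells you whether you are looking at raw weight blobs or a richer structure, which in turn determines what a conversion script would need to map.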
Hi, if you want to use Detectron for C++ deployment, you should ask your questions in https://github.com/facebookresearch/Detectron, as they will be in a better position to help you. That being said, I will at some point add support for exporting PyTorch models to C++ using new functionality from PyTorch 1.0, but it might still take a few weeks. Given that this doesn't look like a question / issue with …
Hi, thanks! Sorry if I miscommunicated: I meant that I have been trying to deploy … I see. Could you point me to how I should go about deploying this in C++? Maybe I can implement it and open a PR? Thanks!
Hi, I indeed misunderstood your question. I'll be looking into supporting jit tracing in the future. This will probably involve modifying the way I use C++ extensions. I can send you some pointers on how to approach that, but I don't have all the answers ahead of time, and there might be some subtle issues that might require some digging. I'm currently tracking this issue in #27, so to avoid duplicates I'll be closing this issue. Let's continue the discussion there. |
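The jit tracing mentioned above became the standard PyTorch-to-C++ export path in PyTorch 1.0. Below is a minimal sketch of the general mechanism, using a tiny stand-in network rather than the actual maskrcnn-benchmark model (whose `forward()` would need to be trace-compatible, which is exactly the open work discussed here); the model, input shape, and file name are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the detection network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
)
model.eval()  # disable training-mode behavior before tracing

# Tracing records the ops executed for this particular example input.
example = torch.randn(1, 3, 32, 32)
traced = torch.jit.trace(model, example)

# The saved archive can be loaded from C++ via torch::jit::load("model.pt").
traced.save("model.pt")
```

The traced module should produce the same outputs as the original for inputs of the traced shape; data-dependent control flow (common in detection heads) is where tracing breaks down and extra work is needed.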
❓ Questions and Help
Do we use Detectron's script to convert the .pkl files to protobuf for C++ deployment?
I'm getting the following error when I try to convert the aforementioned file using that Python script: