This repository shows an example of how to use the ONNX standard to interoperate between different frameworks. In this example, we train a model with PyTorch and make predictions with TensorFlow, ONNX Runtime, and Caffe2.
If you want to understand the details about how this model was created, take a look at this very clear and detailed explanation: ONNX: Preventing Framework Lock in
The aim of this example is to demonstrate how to use the ONNX standard to interoperate between different deep learning frameworks. We train a classifier in PyTorch and then use the trained model to perform inference with TensorFlow, Caffe2, and ONNX Runtime. The project is organized as follows:
- data: Here you will find the dataset generator
- model: It contains the definition of the PyTorch model as well as the training function
- onnx: Here the exported PyTorch model will be saved as an ONNX file
- src: It contains the class from which each evaluator is called
- main.py: This file triggers the entire pipeline (data generation, training, exporting and loading the ONNX model, inference); a sketch of the export and inference step is shown below
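
The export-and-inference step looks roughly like the following. This is a minimal sketch, not the repository's code: the model class, tensor names, and the `onnx/classifier.onnx` path are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A stand-in for the trained classifier (illustrative only).
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
model.eval()

# Export the model to ONNX; the file path and tensor names are assumptions.
dummy_input = torch.randn(1, 2)
torch.onnx.export(
    model,
    dummy_input,
    "onnx/classifier.onnx",
    input_names=["input"],
    output_names=["output"],
)

# Load the exported graph and run inference with ONNX Runtime.
session = ort.InferenceSession(
    "onnx/classifier.onnx", providers=["CPUExecutionProvider"]
)
sample = np.random.randn(1, 2).astype(np.float32)
print(session.run(None, {"input": sample})[0])
```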
To run the pipeline, you just need to type
python main.py
However, I recommend working with the Docker container; you just need to build and run the image (see the commands below).
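
A minimal sketch of the Docker workflow, assuming a Dockerfile at the repository root; the image tag `onnx-example` is an arbitrary choice:

```bash
# Build the image from the repository root.
docker build -t onnx-example .

# Run the container, which executes the pipeline.
docker run --rm onnx-example
```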
Feel free to fork the project and add your own suggestions.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/YourGreatFeature`)
- Commit your Changes (`git commit -m 'Add some YourGreatFeature'`)
- Push to the Branch (`git push origin feature/YourGreatFeature`)
- Open a Pull Request
If you have any questions, feel free to reach out to me at:
Distributed under the MIT License. See LICENSE.md for more information.