This repository has been archived by the owner on Aug 28, 2024. It is now read-only.

Getting Lite Interpreter #202

Open
JosephKKim opened this issue Nov 5, 2021 · 35 comments

Comments

@JosephKKim
Hello, thank you for the nice work you provided.
I am currently working on a project that uses YOLO on an Android device. I was happy to find these examples, but somehow they don't work in my environment. Since I am new to Android, even though I have experience with PyTorch, it is hard to fix the code. I keep getting an error that starts with:

java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.demo.objectdetection/org.pytorch.demo.objectdetection.MainActivity}: com.facebook.jni.CppException: Lite Interpreter verson number does not match. The model version must be between 3 and 5But the model version is 7 () Exception raised from parseMethods at ../torch/csrc/jit/mobile/import.cpp:320 (most recent call first)

I can guess that this error comes from LiteModuleLoader, but I have no idea how to fix it or what the interpreter version means.
I would be glad to get an answer. Thanks! :)
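For context on what the version number means here: every .ptl file carries a bytecode version, and the runtime bundled in pytorch_android_lite only accepts a range of versions. A minimal sketch for inspecting that number locally (the tiny Linear model is just a stand-in, not the actual YOLO model):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import _get_model_bytecode_version

# Stand-in model; any scriptable nn.Module works for this check.
model = torch.nn.Sequential(torch.nn.Linear(4, 2)).eval()

scripted = torch.jit.script(model)
# _save_for_lite_interpreter writes the .ptl format that LiteModuleLoader reads.
optimize_for_mobile(scripted)._save_for_lite_interpreter("tiny.ptl")

# The Android runtime rejects the model when this number is outside its supported range.
print("bytecode version:", _get_model_bytecode_version("tiny.ptl"))
```

The printed version depends on the locally installed PyTorch, which is exactly why a model exported with a newer desktop PyTorch can be rejected by an older `pytorch_android_lite` dependency.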

@Michael97
I have the same issue. I followed the steps exactly, used their own model, and got this error.

@Treshank

Any fix/workarounds found?

@JosephKKim
Author

@Treshank No, I am opening this issue just to let the author know this code is not working properly...

@Treshank
Same issue in TorchVideo as well.

@andreanne-lemay

I am getting the same issue as well.

@JosephKKim
Author

@Treshank @andreanne-lemay
Okay... let's see if the author is still working on this repository...!

@Michael97
I've tried resetting my local repo to this commit - cd35a009ba964331abccd30f6fa0614224105d39 - as suggested, but it doesn't exist (as far as I can see).

@Treshank
@Michael97, I think they mean when making the model: try resetting the yolov5 repo to that commit if you are using a custom model. I guess it doesn't apply to you otherwise.

@Michael97

Ah yeah, that makes sense. I'm using the provided one right now, well at least trying to use it.

@Treshank

I tried it @Michael97, no luck..

@Treshank
The last git version that sort of works for me is #141. It uses an older PyTorch; I have yet to test it with a custom model.

@Treshank
I can't seem to use my trained model with #141. If anyone has been able to use/train a model, some instructions would be great.

@stefan-falk

Any news on this issue? @Treshank were you able to fix this?

@andreanne-lemay
@Treshank @stefan-falk I was able to run my custom model (a DenseNet classifier) without the version number error by reverting the HelloWorld demo to the commit @Treshank indicated (#141). This also means going back to

    implementation 'org.pytorch:pytorch_android:1.8.0'
    implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'

@stefan-falk
@andreanne-lemay thanks!

I didn't try this with a PyTorch model, though. I was using a TensorFlow model (tflite).

implementation 'org.tensorflow:tensorflow-lite-support:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:0.0.0-nightly-SNAPSHOT'

But I guess I'll give pytorch a try then. 👍

@Treshank
@andreanne-lemay, what version of PyTorch did you use to make the object detection model?

@andreanne-lemay

@Treshank I used pytorch 1.10.0 and the following lines to convert my model: https://github.com/andreanne-lemay/cervical_mobile_app/blob/main/mobile_model_conversion.py
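For anyone who can't open the link, the core of such a lite-interpreter conversion generally looks like this (a sketch with a stand-in model, not the exact contents of the linked file):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model; the linked script converts a DenseNet classifier instead.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()

scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
# Writes the lite-interpreter (.ptl) file that LiteModuleLoader loads on Android.
optimized._save_for_lite_interpreter("model.ptl")
```

The key point is that the file must be saved with `_save_for_lite_interpreter` (not plain `torch.jit.save`) with a PyTorch version matching the `pytorch_android_lite` dependency.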

@Treshank
@andreanne-lemay Thanks!! Your solution works! I didn't have to use the converter, though; I used the standard export.py with the torchscript include option. The #141 version uses .pt, not .ptl - keep that in mind.
Also, for reference, I'm using the object detection app and a yolov5m custom model.

@stefan-falk

I can run the speech recognition example now with:

implementation 'org.pytorch:pytorch_android_lite:1.10.0'

Thanks @andreanne-lemay for pointing me there 👍

@Treshank
Yes @stefan-falk, your solution is also working. Using the latest master branch and simply changing in build.gradle
implementation 'org.pytorch:pytorch_android_lite:1.9.0'
to
implementation 'org.pytorch:pytorch_android_lite:1.10.0'
works.

@stefan-falk
I think I already tried that and it didn't work for some (probably other) reason. But never mind, as long as it works now :)

@raedle

raedle commented Feb 16, 2022

The torch.jit.mobile module has a _backport_for_mobile function to "backport" a model to a given version:

from torch.jit.mobile import (
    _backport_for_mobile,
    _get_model_bytecode_version,
)

MODEL_INPUT_FILE = "model_v7.ptl"
MODEL_OUTPUT_FILE = "model_v5.ptl"

print("model version", _get_model_bytecode_version(f_input=MODEL_INPUT_FILE))

_backport_for_mobile(f_input=MODEL_INPUT_FILE, f_output=MODEL_OUTPUT_FILE, to_version=5)

print("new model version", _get_model_bytecode_version(MODEL_OUTPUT_FILE))

@celikmustafa89

celikmustafa89 commented Mar 4, 2022

> The torch.jit.mobile has a _backport_for_mobile function to "backport" a model to a given version

Hi @raedle,
Firstly, it works for me - thank you for this lifesaving post.
However, there is a small missing part: this method increases the size of the model. Version 7 was 34 MB; this one is 68 MB. It doubled the size.

Is there any solution we can apply without increasing the size of the model?

@MaratZakirov
> The torch.jit.mobile has a _backport_for_mobile function to "backport" a model to a given version

It seems to be working, but I suppose it is better to change build.gradle to

implementation 'org.pytorch:pytorch_android_lite:1.10.0'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'

@vemusharan
Even I faced a similar issue. Moving to torch version 1.11.0 resolved it for me:
!pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

@ahmadbajwa8282

@MaratZakirov I'm getting this error when doing the backport:

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import (
    _backport_for_mobile,
    _get_model_bytecode_version,
)

model = torch.hub.load('pytorch/vision:v0.10.0', 'deeplabv3_mobilenet_v3_large', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
# Export the full JIT version of the model (not compatible with the mobile interpreter), left here for comparison
# scripted_module.save("deeplabv3_scripted.pt")
# Export the mobile interpreter version of the model (compatible with the mobile interpreter)
optimized_scripted_module = optimize_for_mobile(scripted_module)
optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")

MODEL_INPUT_FILE = "deeplabv3_scripted.ptl"
MODEL_OUTPUT_FILE = "deeplabv5_scripted.ptl"

print("model version", _get_model_bytecode_version(f_input=MODEL_INPUT_FILE))

_backport_for_mobile(f_input=MODEL_INPUT_FILE, f_output=MODEL_OUTPUT_FILE, to_version=5)

print("new model version", _get_model_bytecode_version(MODEL_OUTPUT_FILE))

@wziwen

wziwen commented Nov 30, 2022

When I try to run the ImageSegmentation demo, I get "model version must be between 3 and 7 but the model version is 8 ()".
After a few searches, I realized this may be caused by the difference between the PyTorch version used to optimize the model and the version in build.gradle. I then used the latest version of 'org.pytorch:pytorch_android_lite', which is 1.12.2, and the problem was gone.
The latest version to use on mobile can be found at: https://search.maven.org/artifact/org.pytorch/pytorch_android_lite
and PyTorch for optimizing the model on your computer at: https://github.com/pytorch/vision/releases

@nighthawk2032
I don't know if this case is still open, but I guess this will fix it:

Make sure you use, in the Android build.gradle,

implementation 'org.pytorch:pytorch_android_lite:1.12.2'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.12.2'

or any other higher PyTorch library version on the Android side.

@HripsimeS
@JosephKKim Hello. Were you able to fix the issue? If yes, can you please share what you modified?
Thanks a lot in advance!

@nighthawk2032
Sharing what I experienced.

I trained a model using YOLOv5s with the latest torch version and exported it to TorchScript - probably saved as version 8 (in my case).

To fix the error @JosephKKim initially presented (which I had as well), I just changed the libraries in the build.gradle to:
"implementation 'org.pytorch:pytorch_android_lite:1.12.2'"
"implementation 'org.pytorch:pytorch_android_torchvision_lite:1.12.2'"
The Android ObjectDetection demo did run and load the model (after that change); however, detections and predictions come with very low confidence, if at all.

I used the same model in the iOS ObjectDetection demo - https://github.com/pytorch/ios-demo-app/tree/master/ObjectDetection - and it ran flawlessly. I had to change the Podfile to use:
"pod 'LibTorch-Lite', '~>1.13.0'"

So I guess the model was trained and exported properly, but the Android libraries are out of date? I don't know exactly; I'm still checking.

@HripsimeS
@nighthawk2032 thanks for your quick reply. Can you please share the link to the export.py file you used to convert the .pt model to .torchscript.ptl?
That way I'll make sure I do things the same way to fix the issue. Thanks a lot in advance :)

@nighthawk2032
export.py is part of the YOLOv5 git repo: https://github.com/ultralytics/yolov5/
Noting again: although the error in question wasn't raised, it still failed to detect properly (in the Android ObjectDetection demo).

@HripsimeS
@nighthawk2032 thanks for your reply. The initial yolov5s.torchscript.ptl model mentioned in the "Prepare the model" section works well with both PyTorch lib versions, 1.10.0 and 1.12.2.
My custom model with the 1.12.2 version of the app launches, but I can't get good predictions, especially with LIVE real-time detection.

As export.py in the official YOLOv5 repo does not actually convert the .pt model to torchscript.ptl (it converts to torchscript) https://github.com/ultralytics/yolov5/blob/master/export.py
I used this export.py, which converts to torchscript.ptl: https://github.com/jeffxtang/yolov5/blob/master/models/export.py

In the "Prepare the model" section it was also mentioned to use the following command if there is any issue with the model:
git reset --hard cd35a009ba964331abccd30f6fa0614224105d39
This error eventually came out: fatal: Could not parse object 'cd35a009ba964331abccd30f6fa0614224105d39'.
When I run this command: git checkout cd35a009ba964331abccd30f6fa0614224105d39
I get this error: fatal: reference is not a tree: cd35a009ba964331abccd30f6fa0614224105d39

It seems that reference is no longer valid - maybe there is a new hash?

@nighthawk2032
@HripsimeS follow the readme.md on the page https://github.com/pytorch/android-demo-app/tree/master/ObjectDetection -
you need to add some script lines to the export.py file in order to export a .ptl file.
As I tested myself, the .torchscript and .torchscript.ptl files are basically (almost) identical, yet there is a slight difference in the exported model: the script optimizes the exported model for mobile.

As I noted before, in my tests so far the torchscript.ptl model seems to be exported properly (it works in the iOS demo), but the Android code is faulty somewhere - I suspect the linked pytorch_android_lite library itself. But that is just a guess.

If there is cross-platform consistency in version numbering, then iOS uses version 1.13 while Android only goes up to 1.12.2. In addition, the android-demo-app repo states that the demo was written for PyTorch 1.10 - since I didn't find access to the archived PyTorch 1.10, I cannot train my example model with it, even though that would best fit how the demo was written back then.

As for the latter part of your question - that seems like a git issue. I'm not sure where you are getting that from, or what you ran to raise those git errors.

@HripsimeS

HripsimeS commented Dec 14, 2022

@nighthawk2032 thanks for your reply! git reset --hard was recommended in https://github.com/pytorch/android-demo-app/tree/master/ObjectDetection, where they say that if there's any issue with running the script or using the model, try git reset --hard cd35a009ba964331abccd30f6fa0614224105d39. That's the reason I tried to reset my yolov5 project in git bash.
