Getting _pickle.UnpicklingError: invalid load key, '$'. error when running val.py after TFLite export #4708
Comments
@jaskiratsingh2000 val.py only works with PyTorch models.
@glenn-jocher What if I want to measure the accuracy of a model exported with tf.py? How can I do that? Please let me know.
@jaskiratsingh2000 good news 😃! Your original issue may now be fixed ✅ in PR #4711. This PR does not bring TFLite inference to val.py, but it does run checks on the weights you pass to train/val/detect to make sure they are of the correct type, with more informative error messages if you, for example, try to pass a TFLite model to val.py. To receive this update, pull the latest code into your local repository.
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
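For illustration, the kind of check the PR describes might look roughly like this (a simplified sketch, not the PR's actual code; the helper name and message format here are assumptions):

```python
# Simplified sketch of a weights-suffix check (not the actual PR #4711 code;
# the helper name and assertion message are assumptions)
from pathlib import Path

def check_suffix(file='yolov5s.pt', suffix=('.pt',), msg=''):
    # Raise an informative error if `file` does not have an accepted suffix
    s = Path(file).suffix.lower()
    assert s in suffix, f'{msg}{file} acceptable suffix is {suffix}, not {s}'

check_suffix('yolov5s-fp16.tflite')  # raises AssertionError with an informative message
```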
@jaskiratsingh2000 To answer your other question: there is no off-the-shelf solution for mAP on YOLOv5 TFLite models at the moment, but they do work directly for inference with detect.py today.
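For example, a minimal invocation might look like this (illustrative; --img should match the image size the model was exported at, and the weights filename is the one from the export step):

```shell
python detect.py --weights yolov5s-fp16.tflite --img 640 --source data/images
```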
My main goal is to know how much accuracy YOLOv5 TFLite models give. How can I measure that? Does using detect.py give something? Please let me know @glenn-jocher
@jaskiratsingh2000 well you could customize val.py for this purpose, or yes you could look at qualitative results from detect.py.
@glenn-jocher What do you mean by qualitative results here, and how can I compare them? @zldrobit Can we have a script to measure the accuracy performance of TFLite models?
@jaskiratsingh2000 qualitative is the opposite of quantitative.
But do you have any idea what changes would be required in a script to get the mAP value for a TFLite model? And where is the script for the mAP function?
@zldrobit Is it possible to write a script to get the mAP value for TFLite models? Can you come up with that?
@jaskiratsingh2000 It's a good idea to support TFLite in val.py.
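A very rough sketch of what that could look like: a wrapper that makes a TFLite interpreter stand in for the PyTorch model inside a val.py-style loop, so its output can be fed to the existing NMS and mAP code. Everything below is illustrative, not the repo's API; it assumes a tf.py export with a single input and a single [1, n, 85] output holding normalized xywh boxes plus objectness and class scores:

```python
# Illustrative sketch only: adapt a TFLite interpreter so its output matches
# what a val.py-style evaluation loop expects before NMS. The class name and
# the assumed [1, n, 85] normalized output are assumptions.
import numpy as np
import tensorflow as tf
import torch

class TFLiteWrapper:
    def __init__(self, weights='yolov5s-fp16.tflite'):
        self.interpreter = tf.lite.Interpreter(model_path=weights)
        self.interpreter.allocate_tensors()
        self.input = self.interpreter.get_input_details()[0]
        self.output = self.interpreter.get_output_details()[0]

    def __call__(self, im):  # im: torch.Tensor, BCHW, values 0-1
        b, ch, h, w = im.shape
        x = im.permute(0, 2, 3, 1).cpu().numpy().astype(np.float32)  # BCHW -> BHWC for TF
        self.interpreter.set_tensor(self.input['index'], x)
        self.interpreter.invoke()
        y = self.interpreter.get_tensor(self.output['index'])
        y[..., :4] *= [w, h, w, h]  # denormalize boxes from 0-1 to pixels
        return torch.from_numpy(y)  # shape the NMS step expects

# pred = TFLiteWrapper()(img)  # then run non_max_suppression(pred, ...) as val.py does
```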
@zldrobit I actually tried this but wasn't able to do it. That is why I approached you to ask if you can help write that script.
@zldrobit yes detect.py and val.py appear to perform similar tasks, but they are also quite different. It may be possible to merge these two functionalities into a single file, though this would be a serious undertaking. We also have PyTorch Hub inference, which operates with a built-in dataloader. These 3 were developed at different times and so are not as unified as would be optimal. There are a few key differences in their purpose, default settings, dataloaders, etc. val.py is designed to obtain the best mAP on a data.yaml val dataset, and detect.py is designed for best real-world inference results from a large variety of sources. A few important aspects of each:
val.py
- dataloader: loads the val dataset with rectangular inference and square padding (pad=0.5, rect=True):

```python
# val.py (val dataloader)
dataloader = create_dataloader(data[task], imgsz, batch_size, gs, single_cls, pad=0.5, rect=True,
                               prefix=colorstr(f'{task}: '))[0]
```
- image size: 640
- rectangular inference: True
- confidence threshold: 0.001
- iou threshold: 0.6
- multi-label: True
- padding: 0.5 * maximum stride
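For concreteness, these val.py thresholds map onto the repo's NMS helper roughly like this (a sketch; the exact call site in val.py may differ slightly):

```python
# Sketch: how val.py-style thresholds map onto utils.general.non_max_suppression
# (requires the yolov5 repo on PYTHONPATH)
import torch
from utils.general import non_max_suppression

out = torch.rand(1, 25200, 85)  # stand-in for raw model output: (batch, boxes, xywh+obj+80 classes)
pred = non_max_suppression(out, conf_thres=0.001, iou_thres=0.6, multi_label=True)
```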
detect.py
- dataloaders (multiple): designed for loading multiple types of media (images, videos, globs, directories, streams).
```python
# detect.py, lines 46-53 at fca5e2a: Set Dataloader
vid_path, vid_writer = None, None
if webcam:
    view_img = check_imshow()
    cudnn.benchmark = True  # set True to speed up constant image size inference
    dataset = LoadStreams(source, img_size=imgsz, stride=stride)
else:
    dataset = LoadImages(source, img_size=imgsz, stride=stride)
```

- image size: 640
- rectangular inference: True
- confidence threshold: 0.25
- iou threshold: 0.45
- multi-label: False
- padding: None
YOLOv5 PyTorch Hub Inference
Inference runs through the models.autoShape() class, used for image loading, preprocessing, inference and NMS. For more info see the YOLOv5 PyTorch Hub Tutorial.
```python
# models/common.py, lines 276-302 at c5360f6
class AutoShape(nn.Module):
    # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
    classes = None  # (optional list) filter by class
    multi_label = False  # NMS multiple labels per box
    max_det = 1000  # maximum number of detections per image

    def __init__(self, model):
        super().__init__()
        self.model = model.eval()

    def autoshape(self):
        LOGGER.info('AutoShape already enabled, skipping... ')  # model already converted to model.autoshape()
        return self

    @torch.no_grad()
    def forward(self, imgs, size=640, augment=False, profile=False):
        # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
        #   file:   imgs = 'data/images/zidane.jpg'  # str or PosixPath
        #   URI:         = 'https://ultralytics.com/images/zidane.jpg'
        #   OpenCV:      = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
        #   PIL:         = Image.open('image.jpg') or ImageGrab.grab()  # HWC x(640,1280,3)
        #   numpy:       = np.zeros((640,1280,3))  # HWC
        #   torch:       = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
        #   multiple:    = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images
        # ...
```
- image size: 640
- rectangular inference: True
- confidence threshold: 0.25
- iou threshold: 0.45
- multi-label: False
- padding: None
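For completeness, typical Hub usage looks like this (standard torch.hub API for this repo; the image URL is the one from the snippet above):

```python
import torch

# Load yolov5s with the AutoShape wrapper applied (handles preprocessing + NMS)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model.conf = 0.25  # NMS confidence threshold (AutoShape attribute shown above)

results = model('https://ultralytics.com/images/zidane.jpg')  # path, URL, PIL, OpenCV, numpy or tensor
results.print()  # print results summary; .show(), .save() and .pandas() are also available
```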
@zldrobit Please let me know if you can help with writing a script to evaluate the TF models. cc @glenn-jocher
Hi, when I try to run the validation test (i.e. checking the mAP value) on the TFLite weights I got, I get the following error. Do you have any idea about it?
Command run and output:
```
val: data=./data/coco128.yaml, weights=['yolov5s-fp16.tflite'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False
YOLOv5 🚀 v5.0-408-g2317f86 torch 1.9.0a0+gitd69c22d CPU

Traceback (most recent call last):
  File "val.py", line 354, in <module>
    main(opt)
  File "val.py", line 329, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "val.py", line 119, in run
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/home/pi/Desktop/yolov5/models/experimental.py", line 94, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 777, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '$'.
```
which is an unpickling error. Do you have any idea about this? @glenn-jocher @zldrobit Your response would be highly appreciated. Thanks!