
Getting _pickle.UnpicklingError: invalid load key, '$'. error while accessing the val.py after tflite export #4708

Closed
jaskiratsingh2000 opened this issue Sep 8, 2021 · 14 comments · Fixed by #4711
Labels
question Further information is requested

Comments

@jaskiratsingh2000

Hi, when I try to run the validation test that checks the mAP value on the TFLite weights I exported, I get the following error. Do you have any idea about it?

Command ran:

python3 val.py --data coco128.yaml --weights yolov5s-fp16.tflite

val: data=./data/coco128.yaml, weights=['yolov5s-fp16.tflite'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.6, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=False, project=runs/val, name=exp, exist_ok=False, half=False
YOLOv5 🚀 v5.0-408-g2317f86 torch 1.9.0a0+gitd69c22d CPU

Traceback (most recent call last):
  File "val.py", line 354, in <module>
    main(opt)
  File "val.py", line 329, in main
    run(**vars(opt))
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "val.py", line 119, in run
    model = attempt_load(weights, map_location=device)  # load FP32 model
  File "/home/pi/Desktop/yolov5/models/experimental.py", line 94, in attempt_load
    ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 777, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '$'.
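For context on what the traceback means: a .tflite file is a FlatBuffer, not a pickle stream, so handing it to torch.load() makes pickle interpret the first byte as a pickle opcode. A FlatBuffer typically begins with a 4-byte little-endian root-table offset, and the byte 0x24 happens to be ASCII '$', which is not a valid opcode. A minimal stdlib-only sketch (the bytes below are illustrative, not a real TFLite file):

```python
import io
import pickle

# Illustrative bytes only: 0x24 ('$') stands in for the first byte of a
# FlatBuffer root offset; "TFL3" is the TFLite file identifier at offset 4.
fake_tflite = io.BytesIO(b"\x24\x00\x00\x00TFL3...")

try:
    pickle.load(fake_tflite)  # what torch.load() effectively does here
except pickle.UnpicklingError as err:
    print(err)  # invalid load key, '$'.
```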

This is an unpickling error. Do you have any idea about it? @glenn-jocher @zldrobit Your response would be highly appreciated. Thanks!

@jaskiratsingh2000 jaskiratsingh2000 added the question Further information is requested label Sep 8, 2021
@glenn-jocher
Member

@jaskiratsingh2000 val.py only works with PyTorch models.

@jaskiratsingh2000
Author

@glenn-jocher What if I want to measure the accuracy performance of a tf.py model? How can I do that? Please let me know.

@glenn-jocher glenn-jocher linked a pull request Sep 8, 2021 that will close this issue
@glenn-jocher
Member

glenn-jocher commented Sep 8, 2021

@jaskiratsingh2000 good news 😃! Your original issue may now be fixed ✅ in PR #4711. This PR does not bring TFLite inference to val.py, but it does run checks on the weights you pass to train/val/detect to make sure that they are of the correct type, with more informative error messages if you, for example, try to pass a TFLite model to val.py.
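The kind of guard the PR describes could look roughly like the sketch below. The function name, signature, and message are illustrative assumptions, not necessarily the PR's exact implementation:

```python
from pathlib import Path

def check_suffix(file, suffix=('.pt',), msg=''):
    # Hypothetical sketch of a weights-suffix guard: reject files whose
    # extension does not match what the script can actually load.
    if isinstance(suffix, str):
        suffix = [suffix]
    for f in file if isinstance(file, (list, tuple)) else [file]:
        s = Path(f).suffix.lower()
        assert s in suffix, f'{msg}{f} acceptable suffix is {suffix}'

check_suffix('yolov5s.pt', ('.pt',))              # passes silently
# check_suffix('yolov5s-fp16.tflite', ('.pt',))   # would raise AssertionError
```

This turns the cryptic UnpicklingError into an immediate, readable message before any deserialization is attempted.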

To receive this update:

  • Git – git pull from within your yolov5/ directory or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – Force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
  • Notebooks – View the updated notebooks in Colab or Kaggle
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

@glenn-jocher
Member

@jaskiratsingh2000 To answer your other question: there is no off-the-shelf solution for measuring mAP on YOLOv5 TFLite models at the moment, but they do work directly for inference with detect.py today:

python detect.py --weights yolov5s.tflite

@jaskiratsingh2000
Author

My main goal is to know how accurate the YOLOv5 TFLite models are. How can I do that? Does using detect.py give me anything for this? Please let me know @glenn-jocher

@glenn-jocher
Member

@jaskiratsingh2000 Well, you could customize val.py for this purpose, or yes, you could look at qualitative results from detect.py.

@jaskiratsingh2000
Author

jaskiratsingh2000 commented Sep 8, 2021

@glenn-jocher What do you mean by qualitative results here, and how can I compare them?

@zldrobit Can we have a script to measure the Accuracy Performance for TFlite models?

@glenn-jocher
Member

@jaskiratsingh2000 qualitative is the opposite of quantitative.

@jaskiratsingh2000
Author

jaskiratsingh2000 commented Sep 8, 2021 via email

@jaskiratsingh2000
Author

@zldrobit Is it possible to write a script that computes the mAP value for TFLite models? Could you come up with that?

@zldrobit
Contributor

@jaskiratsingh2000 It's a good idea to support TFLite in val.py so we can examine the performance in detail after conversion. You could try adding TFLite inference code in val.py.
@glenn-jocher I see some code in detect.py and val.py is replicated, e.g. in run(). Maybe we could refactor both files to support validation of PyTorch, ONNX, and TensorFlow/TFLite models.
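Such a refactor could start from a small shared dispatcher that both scripts call to pick an inference backend from the weights suffix. Everything below (names, supported formats) is a hypothetical sketch, not existing YOLOv5 code:

```python
from pathlib import Path

# Hypothetical suffix-to-backend map shared by detect.py and val.py
BACKENDS = {'.pt': 'pytorch', '.onnx': 'onnx', '.tflite': 'tflite', '.pb': 'tensorflow'}

def select_backend(weights):
    # Map a weights file to the backend that should run it, failing early
    # (with a readable error) on unsupported formats.
    suffix = Path(weights).suffix.lower()
    if suffix not in BACKENDS:
        raise ValueError(f'unsupported weights format: {suffix}')
    return BACKENDS[suffix]

print(select_backend('yolov5s-fp16.tflite'))  # tflite
```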

@jaskiratsingh2000
Author

@zldrobit I actually tried this but wasn't able to do it. That is why I approached you to ask if you can help write that script.
Please let me know. That would be of great help.

@glenn-jocher
Member

glenn-jocher commented Sep 10, 2021

@zldrobit yes detect.py and val.py appear to perform similar tasks, but they are also quite different. It may be possible to merge these two functionalities into a single file, though this would be a serious undertaking. We also have PyTorch Hub inference, which operates with a built-in dataloader. These 3 were developed at different times and so are not as unified as would be optimal.

There are a few key differences in their purpose, default settings, dataloaders, etc. val.py is designed to obtain the best mAP on a data.yaml val dataset, and detect.py is designed for best real-world inference results from a large variety of sources. A few important aspects of each:

val.py

  • dataloader LoadImagesAndLabels(): designed to load train, val, test dataset images and labels. Augmentation capable but disabled.

    yolov5/val.py

    Lines 145 to 146 in e88e8f7

    dataloader = create_dataloader(data[task], imgsz, batch_size, gs, single_cls, pad=0.5, rect=True,
                                   prefix=colorstr(f'{task}: '))[0]
  • image size: 640
  • rectangular inference: True
  • confidence threshold: 0.001
  • iou threshold: 0.6
  • multi-label: True
  • padding: 0.5 * maximum stride
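The rectangular-inference shape and the 0.5 * stride padding above can be sketched as follows. This is an illustrative simplification of what the val.py dataloader does per batch, not the exact LoadImagesAndLabels logic:

```python
import math

def rect_shape(h, w, img_size=640, stride=32, pad=0.5):
    # Scale the longer side to img_size, then round each side up to a
    # stride multiple, with pad * stride of extra slack (val.py uses 0.5).
    r = img_size / max(h, w)
    h, w = h * r, w * r
    return (math.ceil(h / stride + pad) * stride,
            math.ceil(w / stride + pad) * stride)

print(rect_shape(720, 1280))  # (384, 672)
```

A 1280x720 image is therefore evaluated at 672x384 rather than padded out to a full 640x640 square, which is part of why val.py's mAP differs from naive square inference.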

detect.py

  • dataloaders (multiple): designed for loading multiple types of media (images, videos, globs, directories, streams).

    yolov5/detect.py

    Lines 46 to 53 in fca5e2a

    # Set Dataloader
    vid_path, vid_writer = None, None
    if webcam:
        view_img = check_imshow()
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz, stride=stride)
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride)
  • image size: 640
  • rectangular inference: True
  • confidence threshold: 0.25
  • iou threshold: 0.45
  • multi-label: False
  • padding: None

YOLOv5 PyTorch Hub Inference

The models.AutoShape() class is used for image loading, preprocessing, inference and NMS. For more info see the YOLOv5 PyTorch Hub Tutorial.

yolov5/models/common.py

Lines 276 to 302 in c5360f6

class AutoShape(nn.Module):
    # YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
    classes = None  # (optional list) filter by class
    multi_label = False  # NMS multiple labels per box
    max_det = 1000  # maximum number of detections per image

    def __init__(self, model):
        super().__init__()
        self.model = model.eval()

    def autoshape(self):
        LOGGER.info('AutoShape already enabled, skipping... ')  # model already converted to model.autoshape()
        return self

    @torch.no_grad()
    def forward(self, imgs, size=640, augment=False, profile=False):
        # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
        #   file:   imgs = 'data/images/zidane.jpg'  # str or PosixPath
        #   URI:         = 'https://ultralytics.com/images/zidane.jpg'
        #   OpenCV:      = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
        #   PIL:         = Image.open('image.jpg') or ImageGrab.grab()  # HWC x(640,1280,3)
        #   numpy:       = np.zeros((640,1280,3))  # HWC
        #   torch:       = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
        #   multiple:    = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images

@jaskiratsingh2000
Author

@zldrobit Please let me know if you can help write a script to evaluate the TF models. cc @glenn-jocher
