Model Ensembling Tutorial #318
Can I use it in version 1?
What's the influence of model ensembling compared to test-time augmentation (TTA)? Which one is better?
Well, I see: the model ensembling method is essentially using a weaker model to recover detections missed by a stronger model. In contrast, TTA can also recover missed detections by varying the input, while still using only the best model.
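As a rough illustration of the TTA idea discussed above, here is a minimal, framework-free sketch of horizontal-flip TTA for detection boxes. Everything here (the `(x1, y1, x2, y2, score)` box format, the `model` and `flip_fn` callables) is illustrative, not the actual YOLOv5 API:

```python
# Minimal sketch of horizontal-flip test-time augmentation (TTA) for
# detection boxes. The model is a plain callable returning boxes as
# (x1, y1, x2, y2, score) tuples; names are hypothetical.

def unflip_boxes(boxes, img_width):
    """Map boxes predicted on a horizontally flipped image back to
    original-image coordinates."""
    out = []
    for x1, y1, x2, y2, score in boxes:
        out.append((img_width - x2, y1, img_width - x1, y2, score))
    return out

def tta_predict(model, image, img_width, flip_fn):
    """Run the model on the original and the flipped image, then merge
    the two box sets (NMS would normally follow)."""
    boxes = list(model(image))
    flipped_boxes = model(flip_fn(image))
    boxes.extend(unflip_boxes(flipped_boxes, img_width))
    return boxes
```

Real TTA pipelines add more augmentations (scales, in particular) and run NMS over the merged boxes afterwards.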
@Zzh-tju ensembling and TTA are not mutually exclusive. You can TTA a single model, and you can ensemble a group of models with or without TTA:
@Zzh-tju ensembling runs multiple models, while TTA tests a single model with different augmentations. Typically I've seen the best results when merging output grids directly (i.e. ensembling YOLOv5l and YOLOv5x), rather than simply appending boxes from multiple models for NMS to sort out. This is not always possible, however; for example, when ensembling an EfficientDet model with YOLOv5x you cannot merge grids, so you must use NMS, WBF, or Merge NMS to get a final result.
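The "append boxes from multiple models and let NMS sort it out" strategy mentioned above can be sketched in plain Python. This is a simple greedy NMS over `(x1, y1, x2, y2, score)` tuples, not YOLOv5's actual implementation:

```python
# Sketch of ensembling by appending detections from several models and
# running a single NMS pass over the combined set.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, ...) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, iou_thres=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thres for k in kept):
            kept.append(b)
    return kept

def ensemble_boxes(model_outputs, iou_thres=0.5):
    """Append detections from every model, then run one NMS pass."""
    merged = [b for boxes in model_outputs for b in boxes]
    return nms(merged, iou_thres)
```

Grid merging, by contrast, requires models with identical output shapes, which is why it works for YOLOv5l + YOLOv5x but not for EfficientDet + YOLOv5x.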
How can I ensemble EfficientDet D7 with YOLOv5x?
@Blaze-raf97 with the right amount of coffee anything is possible.
How to solve this problem?
@LokedSher pycocotools is only intended for mAP on COCO data using coco.yaml. https://pypi.org/project/pycocotools/
Thanks for your reply!
@LokedSher I also encountered the same problem as you, but after reading your Q&A I still don't know what to change to get the picture given by the author.
I want to ensemble two yolov5x6 models trained on the same data with some variation. In other words, how exactly do I use that?
@pathikg YOLOv5 ensembling is automatically built into detect.py and val.py, so simply pass two weights:
Thanks @glenn-jocher for the quick reply, but I want to do this in a Python script.
I'd follow the code in detect.py and use the Ensemble() module from models/common.py
You mean from models/experimental.py?
Yes, sorry, in experimental.py.
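For intuition, here is a rough, framework-free sketch of what the Ensemble() module in models/experimental.py does at inference time: it runs every member model on the same input and combines their predictions before NMS. Models are plain callables here; the real module concatenates torch tensors:

```python
# Rough sketch of YOLOv5's Ensemble() behavior: a container of models
# whose per-model predictions are combined at inference, with NMS
# applied afterwards as usual. The real module works on torch tensors
# (concatenating raw prediction tensors); this stand-in uses lists.

class Ensemble(list):
    """A list of models whose outputs are combined at inference."""
    def __call__(self, x):
        preds = []
        for model in self:          # run each member on the same input
            preds.extend(model(x))  # real code concatenates tensors
        return preds                # NMS runs on the combined predictions
```

A script would build the ensemble, call it on an image, then run NMS on the result, mirroring the flow in detect.py.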
@glenn-jocher I really need your help for one of my problems.
Hi @glenn-jocher, what would be the best practice for deploying an ensemble model like this? I know we can export the individual models for different deployment frameworks, but how would I export an ensemble?
Hello, I read a previous discussion about ensembling multiple trained networks by simply passing more than one weight file during inference. My question is: what technique is being used for fusing the predictions? Is it majority voting on the bounding boxes? Some weighted averaging?
I ended up predicting each image with all of the ensemble members separately, then combining the bounding box predictions and running a second-stage NMS to generate a final combined prediction. It seems to work OK. Another possibility is to average the weights of the ensemble members into a single averaged model, like in federated learning, but I have not properly evaluated that method yet.
@michael-mayo that's a good approach! Typically, the ensembling technique involves averaging the model weights and biases instead of the predictions. Here you are combining the predictions, which can be achieved using the NMS algorithm. Keep in mind that it's important to experiment and choose the best method based on the specific problem and the performance of each approach.
I should also add that I trained each ensemble member with a different global random seed and a different (5/6) subset of the training data, to improve ensemble diversity.
@michael-mayo that's a great technique to improve ensemble diversity. It can help reduce the chances of overfitting (which can happen if all ensemble members are trained on exactly the same data) and increase the robustness of the final predictions.
Hi @glenn-jocher I have a question: Suppose we train an object detection model separately on two completely separate datasets (datasets A and B) so that the classes are the same in both datasets (for example usask and arvalis in Wheat Head Detection).
Hi @mek651 this can be done straightforwardly by loading each model, getting the state_dict for each model (which is a sequence of arrays or tensors, I believe), doing a straightforward average of the two state_dicts, then deep copying one of the models, assigning the averaged state_dict, and saving it. In YOLO this is only going to make sense if the classes are exactly the same, though. You might get better results by training one larger model on both datasets at the same time.
Thanks @michael-mayo. Do you have any idea about this?
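The weight-averaging procedure @michael-mayo describes can be sketched as follows, with parameters modeled as plain lists of floats instead of tensors. With PyTorch you would average the state_dict tensors the same way ((sd_a[k] + sd_b[k]) / 2) and then call model.load_state_dict on the result:

```python
# Sketch of averaging two model state_dicts entry by entry.
# Parameters are plain lists of floats here for simplicity; the same
# elementwise mean applies to tensors in a real PyTorch state_dict.

def average_state_dicts(sd_a, sd_b):
    """Return a new state_dict whose every parameter is the elementwise
    mean of the two inputs. Keys (i.e. architectures) must match."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {
        key: [(a + b) / 2 for a, b in zip(sd_a[key], sd_b[key])]
        for key in sd_a
    }
```

As noted above, this only makes sense when both models share the exact same architecture and class list.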
Thanks for the relevant discussion. I am new to YOLOv5 and this platform. I wonder if there is a way to do ensemble training on two different datasets, with one frozen pre-trained model and one trained from scratch?
Hello, I did not see the Python implementation for detect tasks. Is it the same as the predict task? Thank you. Regards,
Did you solve this? A different number of classes will cause the error.
Actually, for the ensembling technique I want to use majority voting. In this inference/test command, what type of aggregation is used, and how can we use majority voting? `python val.py --weights yolov5x.pt yolov5l6.pt --data coco.yaml --img 640 --half`
What kind of specific ensembling technique is used when we do ensembling with `python val.py --weights yolov5x.pt yolov5l6.pt --data coco.yaml --img 640 --half`?
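For reference, a hypothetical majority-voting fusion could look like the sketch below. Note this is not what val.py does with multiple --weights (YOLOv5 simply combines ensemble outputs and runs NMS); it only illustrates one way to implement the voting idea, assuming each model contributes a list of (x1, y1, x2, y2) boxes:

```python
# Hypothetical majority-voting fusion of detections from several models.
# A box survives only if more than half of the models produce an
# overlapping box (IoU above a threshold). Illustrative only.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def majority_vote(model_outputs, iou_thres=0.5):
    """Keep a box only when a strict majority of models agree on it."""
    n_models = len(model_outputs)
    kept = []
    for i, boxes in enumerate(model_outputs):
        for box in boxes:
            votes = 1  # the proposing model votes for its own box
            for j, other in enumerate(model_outputs):
                if j != i and any(iou(box, b) >= iou_thres for b in other):
                    votes += 1
            # strict majority, and avoid duplicate clusters
            if votes * 2 > n_models and not any(
                    iou(box, k) >= iou_thres for k in kept):
                kept.append(box)
    return kept
```

To use this with YOLOv5, you would run each model separately in a script and pass their per-image detections into `majority_vote`, rather than relying on the built-in `--weights` ensembling.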
📚 This guide explains how to use YOLOv5 🚀 model ensembling during testing and inference for improved mAP and Recall. UPDATED 25 September 2022.
From https://www.sciencedirect.com/topics/computer-science/ensemble-modeling:
Before You Start
Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.
Test Normally
Before ensembling we want to establish the baseline performance of a single model. This command tests YOLOv5x on COCO val2017 at image size 640 pixels.
`yolov5x.pt` is the largest and most accurate model available. Other options are `yolov5s.pt`, `yolov5m.pt` and `yolov5l.pt`, or your own checkpoint from training a custom dataset, `./weights/best.pt`. For details on all available models please see our README table.
Output:
Ensemble Test
Multiple pretrained models may be ensembled together at test and inference time by simply appending extra models to the `--weights` argument in any existing val.py or detect.py command. This example tests an ensemble of 2 models together:
Output:
Ensemble Inference
Append extra models to the `--weights` argument to run ensemble inference:
Output:
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on MacOS, Windows, and Ubuntu every 24 hours and on every commit.