about network: FastSAM is only to train a YOLOV8-seg? #206
Comments
Yeah, I have the same doubt as you in this regard. The text prompt only loads the 'ViT-B/32' weights without any finetuning, so I guess FastSAM.pt is just yolov8-seg.pt. But when I trained yolov8-seg on my own dataset and used the weights in FastSAM, I ran into a problem. There is no complete tutorial for developers on how to train a FastSAM model.
You can get the training code at this link: https://github.com/CASIA-IVA-Lab/FastSAM/releases/tag/v0.0.2
I used it to train my model, but when I use the result in the FastSAM code, it always throws an error.
I solved my problem. The cfg folder needs to be copied into the ultralytics install path.
I can successfully train the model with the above-mentioned training code. Then, running this file, we can successfully train the model. (If you have any more detailed questions, you can contact me at suliangxu@nuaa.edu.cn.)
Hello @suliangxu, thank you for opening this issue and your useful comments. Using the training and validation codes released, I have been trying to train FastSAM on my custom dataset for instance segmentation (with 6 classes). I structured my dataset following the required structure. Seems the problem is with the augmentation, so I explicitly disabled augmentation by setting `augment=False`. Here's my training script:

```python
from ultralytics import YOLO

model = YOLO(model="FastSAM-s.pt")
model.train(
    data="sa.yaml",
    task='segment',
    epochs=3,
    augment=False,
    batch=8,
    imgsz=255,
    overlap_mask=False,
    save=True,
    save_period=5,
    project="fastsam",
    name="test",
    val=False,
)
```

I would appreciate your help in fixing this. Thank you!
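For reference, a dataset YAML for ultralytics instance segmentation typically looks like the following. This is only a sketch: the paths and class names here are placeholders, not the actual contents of the poster's `sa.yaml`.

```yaml
# Hypothetical sa.yaml for a 6-class instance-segmentation dataset
path: datasets/my_dataset   # dataset root (placeholder path)
train: images/train         # training images, relative to path
val: images/val             # validation images, relative to path

# class names (placeholders for the 6 custom classes)
names:
  0: class0
  1: class1
  2: class2
  3: class3
  4: class4
  5: class5
```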
It seems this error occurs because of the
Thank you for your response @suliangxu. I am working with grayscale images (a single channel).
I changed the hyper-parameter
Thank you @suliangxu. That solved the augmentation issue, but another error popped up. I think the issue still revolves around the number of color channels. Is it that FastSAM can't be trained on single-channel images? Any help?
@suliangxu, thank you for your help. I fixed the issue already and have been able to train the model on my custom dataset. I had to convert my grayscale images to three-channel images. Seems the training and validation codes were designed to work explicitly with BGR images.
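The grayscale-to-three-channel conversion described above can be sketched with NumPy. This is only a minimal illustration of the idea, not the commenter's actual preprocessing code; the array sizes are made up.

```python
import numpy as np

def gray_to_bgr(gray: np.ndarray) -> np.ndarray:
    """Replicate a single-channel (H, W) image into a 3-channel (H, W, 3) array.

    Since all three channels are identical copies, the result is valid as
    either BGR or RGB input for models that expect 3-channel images.
    """
    if gray.ndim != 2:
        raise ValueError(f"expected a 2-D grayscale array, got shape {gray.shape}")
    return np.repeat(gray[:, :, np.newaxis], 3, axis=2)

# Example: a dummy 4x4 grayscale image becomes 4x4x3
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
bgr = gray_to_bgr(img)
print(bgr.shape)  # (4, 4, 3)
```

With OpenCV installed, `cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)` does the same thing.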
@glenn-jocher can you please clarify this?
Hey, I'm training the yolov8-seg model as provided in the latest released training code. Has anyone trained FastSAM or added the prompt-selection part? Please share the code if possible.
@BirATMAN Yes, I have. You need to read the paper to understand how FastSAM works. Then, download a model checkpoint (which is basically a yolov8). Follow the training and validation codes provided in the repo to finetune the model on your custom dataset (all-instance prediction). Then use the FastSAM prompt for post-processing (prompt-guided prediction). Check the 'Inference.py' in the repo on how to do this. Very easy. Hope this helps.
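As a rough sketch of that prompt-guided stage, the repo's inference flow looks roughly like the following. Treat this as an outline only: the checkpoint, image, and output paths are assumptions, and it needs the FastSAM package plus a downloaded checkpoint (and ideally a GPU) to actually run.

```python
from fastsam import FastSAM, FastSAMPrompt

# Stage 1: all-instance prediction with the trained yolov8-seg checkpoint
model = FastSAM('FastSAM-x.pt')  # assumed checkpoint path
everything_results = model(
    './images/dogs.jpg',         # assumed input image
    device='cuda',
    retina_masks=True,
    imgsz=1024,
    conf=0.4,
    iou=0.9,
)

# Stage 2: prompt-guided selection over the predicted masks
prompt_process = FastSAMPrompt('./images/dogs.jpg', everything_results, device='cuda')
ann = prompt_process.text_prompt(text='a photo of a dog')
# Other prompt modes exist as well, e.g. box, point, and everything prompts.
prompt_process.plot(annotations=ann, output_path='./output/')
```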
@suliangxu were you able to compute
Hello, how do I set the dataset path for running train.py with COCO?
Hi @joshua-atolagbe! I still have some confusion regarding this. I would like to know if the segmentation performance of FastSAM relies on the YOLOv8-seg model. Is the function of the second stage of FastSAM merely to select and output a specific part of the content based on the text prompts? If that's the case, can it be understood that the segmentation accuracy of FastSAM is consistent with that of YOLOv8-seg?
Hi, can anyone clarify the annotation format? Some sources give "(class-index) (segmentation points)", while other internet sources mention a different layout. Should bbox points be included or not?
Hello, I trained the best.pt file myself, but when I run FastSAM segmentation with it, no result comes out: there is no information such as the processing speed of a single image, and no output after processing. Have you encountered this problem? Thank you for answering.
@pcycccccc, yes you're correct.
@YA12SHYAM. It's:
- `(class-index) (segmentation points)` for segmentation
- `(class-index) (bbox points)` for detection
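To make that layout concrete, here is a small parser sketch for one YOLO-style segmentation label line (class index followed by normalized polygon coordinates, no bbox fields). The sample line is made up for illustration.

```python
def parse_seg_label(line: str):
    """Parse one segmentation label line: a class index followed by
    normalized (x, y) polygon points; no bbox fields are present."""
    fields = line.split()
    cls = int(fields[0])
    coords = [float(v) for v in fields[1:]]
    if len(coords) % 2 != 0:
        raise ValueError("polygon coordinates must come in (x, y) pairs")
    points = list(zip(coords[0::2], coords[1::2]))
    return cls, points

# Example label line: class 2 with a 3-point polygon
cls, pts = parse_seg_label("2 0.10 0.20 0.50 0.20 0.30 0.80")
print(cls, pts)  # 2 [(0.1, 0.2), (0.5, 0.2), (0.3, 0.8)]
```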
@zgf1739, what version of ultralytics are you using?
Is the training code in FastSAM tied to a specific ultralytics version? The train and validate code should use a consistent ultralytics version.
Successfully uninstalled ultralytics-8.1.34 |
To fix this, the Ultralytics version should be 8.0.120, as pinned in setup.py. Hope it helps you guys, thanks.
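Assuming a pip-based setup, pinning the version might look like this (the uninstall step mirrors the 8.1.34 log above; adjust to your environment):

```shell
pip uninstall -y ultralytics
pip install ultralytics==8.0.120
```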
Hello, I confirmed that both the Ultralytics version in use and the version in setup.py are set to 8.0.120, but the output is still blank.
When I trained FastSAM on the coco128-seg dataset, I found that the part that needed training was the YOLOv8-seg model. So is FastSAM just a trained YOLOv8-seg with prompting operations added on top?