Reproducing problem on cityscapes #150
Comments
@junyanz Ok, I see. Thanks a lot! I use …
Please see the notes from the original pix2pix torch repo (copied below):
Please see this discussion.
I want to evaluate on the Cityscapes dataset, because I am trying to reproduce the FCN-score results from pix2pix using the https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix code.
Hello, I evaluated the results after using the Cityscapes dataset, and the values I obtained are far from those reported in the paper. So I would like to ask: is the code in 'evaluate.py' only meant for evaluating label2image results, or can it also evaluate image2label? After I use an RGB image to generate a semantic segmentation result, how do I evaluate that segmentation? Thanks a lot!
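For reference, the three FCN-score numbers discussed in this thread (per-pixel accuracy, per-class accuracy, and class IoU) can be computed directly from pairs of predicted and ground-truth label maps. The sketch below is a minimal, generic version of that computation, not the repo's 'evaluate.py' (which scores generated photos with a pretrained FCN-8s); it assumes both maps are integer arrays that use the same class IDs:

```python
import numpy as np

def fast_hist(gt, pred, n_cls):
    # Accumulate an n_cls x n_cls confusion matrix from two flattened label maps.
    valid = (gt >= 0) & (gt < n_cls)
    return np.bincount(n_cls * gt[valid].astype(int) + pred[valid].astype(int),
                       minlength=n_cls ** 2).reshape(n_cls, n_cls)

def scores(hist):
    # Pixel accuracy, mean per-class accuracy, and mean class IoU from the confusion matrix.
    pixel_acc = np.diag(hist).sum() / hist.sum()
    cls_acc = np.nanmean(np.diag(hist) / hist.sum(axis=1))
    iou = np.diag(hist) / (hist.sum(axis=1) + hist.sum(axis=0) - np.diag(hist))
    return pixel_acc, cls_acc, np.nanmean(iou)

# Hypothetical usage: `pairs` is an iterable of (ground-truth, predicted) label maps.
# n_cls = 19  # e.g. the 19 Cityscapes training classes
# hist = np.zeros((n_cls, n_cls))
# for gt_map, pred_map in pairs:
#     hist += fast_hist(gt_map.flatten(), pred_map.flatten(), n_cls)
# print(scores(hist))
```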
@FishYuLi Thanks for your reply.
Hi, did you re-train the model and get results similar to those in the paper? Could you share some details (e.g., parameters)? Right now I cannot reproduce the results with the default parameters. Thanks! @FishYuLi
@CR-Gjx Yes, I got results similar to those in the paper just with the default parameters. I think you may try to set your …
Thanks for your help, I have reproduced the results.
Hello! I have read your paper very carefully and tried to reproduce your experiments on the Cityscapes dataset (photo to label). As stated in the appendix, I changed the input size from 256 to 128 and used resnet_6blocks for G, trained a model, and evaluated the performance of the photo-to-label generator. You report (0.58, 0.22, 0.16) for pixel acc, cls acc, and IoU, but my best results are (0.51, 0.16, 0.10), which is quite a large gap. I wonder if there are any other details I should change. What are your configs for Cityscapes? Can the batch size influence the final results? Also, how did you evaluate the segmentation results of the 128x128 generated images? Did you resize the original label images to 128x128 and then run the evaluation? Thanks a lot. This work is amazing.
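One detail that can skew these numbers when evaluating at 128x128: if the ground-truth label maps are resized with a bilinear or bicubic filter, the discrete class IDs get blended into invalid values. A minimal sketch of a nearest-neighbor resize (the helper name and the use of PIL here are my own illustration, not something from the repo):

```python
import numpy as np
from PIL import Image

def resize_label_map(path, size=(128, 128)):
    # NEAREST keeps the discrete class IDs intact; interpolating filters would
    # average neighboring IDs into values that belong to no class.
    label = Image.open(path)
    return np.array(label.resize(size, resample=Image.NEAREST))
```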