Running instruction #1
Hi, sorry for the confusion. We tried to keep the scripts exactly as they were when we submitted the paper, so that every number is replicable, which is why they are a bit messy. Here are some clearer instructions.
Thank you for raising this issue, and feel free to let me know how it goes.
I downloaded your extracted representations and followed your instructions, and it works now. Thank you very much.
I tried to extract the representations for VLCS, but got stuck. Can you check PACS/alex_run_baseline.py, line 131, "model.loss"? In alex_cnn_baseline.py, the class MNISTcnn has no loss attribute.
Hi, I think it is there at line 188. Sorry that I wasn't clear before: this is only the base AlexNet used to extract the representations, so the hex_flag should be set to False when running it (-hex 0). Please try it and feel free to let me know if there are further issues.
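For example, something like the following should work (only the script name and the -hex flag are named in this thread; any other arguments the script needs, such as data paths or the target domain, are not shown):

```bash
# Run the base AlexNet only to extract representations, with HEX disabled.
# Only the -hex flag is confirmed in this thread; additional arguments may be required.
python alex_run_baseline.py -hex 0
```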
Thanks for your help.
In our experiments, we used 10 epochs for the representation extraction and 100 epochs for the top layer. On a modern GPU, this process should take roughly 600 seconds.
Yes, that is what is reported in the paper and what I used. The result shocked me. Basically, for each target domain (photo, art, cartoon, sketch), I ran alex_run_baseline.py with 10 epochs. The "Best Validated Model Prediction Accuracy" values I got are as follows: The same situation occurs when I run it on the VLCS dataset.
If these are accuracies, then these results are far better than what we reported in the paper, and probably even better than today's state of the art. A few things to double-check:
Feel free to let me know how it goes.
These results were produced with learning rate 1e-4; I also tried other learning rates such as 5e-5 and 1e-5, which is just a matter of 10 more epochs each. I think my training procedure was right: run "base" for each target domain, which generated 12x4 .npy files in ./representations/, and then run "top" for each domain. The underlying network is still AlexNet. I redid it this morning, and the 4 "top" results were over 95%. It looks as if the test data were also used for training.
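To be concrete, the loop looks roughly like this (the target-domain flag below is just a placeholder, not necessarily the scripts' actual argument name):

```bash
# Rough sketch of the procedure described above; "-test_domain" is a placeholder.
for domain in photo art_painting cartoon sketch; do
    # "base": extract representations (10 epochs) into ./representations/
    python alex_run_baseline.py -hex 0 -test_domain "$domain"
done
for domain in photo art_painting cartoon sketch; do
    # "top": train the top layer (100 epochs) on the extracted representations
    python alex_run_top.py -test_domain "$domain"
done
# Clear the cached representations before starting another full run
rm -f ./representations/*.npy
```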
Yeah, maybe it's also related to how you run it. Basically, every run should generate the same results; if the accuracy gets higher and higher, it looks like the weights are being re-used.
I run it the same way as you do. Each time, I run "alex_run_baseline" with a different target domain order (1: P A C S, 2: S A C P, ...), then "alex_run_top" for each domain, and remove everything in "./representations" after obtaining the 4 accuracies. But I got weird accuracies as well:
If the parameters were re-used, the results should be affected by the different training orders, but they aren't. Maybe you can provide me with your .txt files, because if we are running the same code, that seems to be the last remaining uncertainty.
Sure, I just sent them to you by email.
Thank you, I found out where the mistake was, and I have already extended the experiments to VLCS.
Thanks for your implementation.
I am trying to reproduce the results on the PACS dataset.
I have put the train/test/val .txt files in sourceonly/(art_painting/c/p/s), as required.
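Roughly, the layout looks like this (the abbreviated folders are spelled out here as cartoon, photo, and sketch; the actual directory names may differ):

```
sourceonly/
├── art_painting/   # train.txt, val.txt, test.txt
├── cartoon/        # train.txt, val.txt, test.txt
├── photo/          # train.txt, val.txt, test.txt
└── sketch/         # train.txt, val.txt, test.txt
```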
I simply tried some commands:
Could you write up instructions on how to reproduce the paper's results on the PACS dataset?
Thank you very much.