
Strange output with red noise point #3

Open
AlexCHENSiyu opened this issue Apr 6, 2024 · 6 comments

@AlexCHENSiyu

[image: styled_38_0]
I got this good output yesterday,
but when I ran it again today, I got this strange output:
[image: styled_39_1]

Have you encountered this problem before? Or could you give me some suggestions?

@AlexCHENSiyu (Author)

OK, I probably found the reason. BTW, this part should be written like this:
[image]
Add ['arr_0'] to it, otherwise it cannot run.
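
For reference, a minimal sketch of what the ['arr_0'] indexing fixes, assuming the file being loaded is a .npz archive saved with np.savez without an explicit key (NumPy then stores the array under the default key "arr_0"); the file name below is a placeholder, not the repo's actual path:

```python
import numpy as np

# np.load on a .npz archive returns an NpzFile mapping, not the array itself.
archive = np.load("styled_features.npz")  # placeholder path
features = archive["arr_0"]               # default key used by np.savez without keyword args
print(type(features), features.shape)
```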

@ivanstepanovftw (Owner)

I had this when I did not clamp the styled image:

styled.data = styled.data.clamp_(min_vals[:, None, None], max_vals[:, None, None])

similar to how it is performed in the closure:
styled.data = styled.data.clamp_(min_vals[:, None, None], max_vals[:, None, None])
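
A minimal sketch of how min_vals/max_vals might be derived, assuming the styled image is optimized in ImageNet-normalized space (the mean/std values below are the usual torchvision constants; the actual repo may compute the bounds differently):

```python
import torch

mean = torch.tensor([0.485, 0.456, 0.406])  # ImageNet channel means (assumption)
std = torch.tensor([0.229, 0.224, 0.225])   # ImageNet channel stds (assumption)

# Bounds of a valid [0, 1] pixel after normalization, per channel.
min_vals = (0.0 - mean) / std
max_vals = (1.0 - mean) / std

styled = torch.rand(3, 256, 256, requires_grad=True)  # placeholder styled image
styled.data = styled.data.clamp_(min_vals[:, None, None], max_vals[:, None, None])
```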

I am actually thinking about adding a loss for out-of-bounds pixels, so the optimizer would not focus on these pixels.
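
A minimal sketch of what such an out-of-bounds loss could look like, assuming the same min_vals/max_vals bounds as above; this is just one possible formulation, not the repo's implementation:

```python
import torch

def out_of_bounds_loss(styled, min_vals, max_vals):
    # Distance of each pixel below the lower bound / above the upper bound.
    below = (min_vals[:, None, None] - styled).clamp(min=0)
    above = (styled - max_vals[:, None, None]).clamp(min=0)
    # Quadratic penalty keeps gradients proportional to how far out of range a pixel is.
    return (below.pow(2) + above.pow(2)).mean()

# Hypothetical usage inside the closure, weighted against the content/style terms:
# loss = content_loss + style_loss + 100.0 * out_of_bounds_loss(styled, min_vals, max_vals)
```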

@AlexCHENSiyu (Author)

Yes, right after I posted the issue, I found this mistake of mine.
I read the paper carefully and realized that long-term temporal consistency requires heavy computation. It might be hard for me to achieve that.

@AlexCHENSiyu (Author)

BTW, do you know of any standard metric to compare the quality of style-transferred images across different pre-trained models?

@AlexCHENSiyu (Author)

And is there any reason why you chose efficientnet-b0 as the best model?

@ivanstepanovftw (Owner) commented Apr 8, 2024

BTW, do you know of any standard metric to compare the quality of style-transferred images across different pre-trained models?

Unfortunately, no.

And is there any reason why you chose efficientnet-b0 as the best model?

I am not sure if it is the best for style transfer. EfficientNet was chosen because it is efficient, since I do not have a local GPU.

Also, in the paper "CNN Filter DB: An Empirical Investigation of Trained Convolutional Filters" (arxiv:2203.15331v2), they say that "learned filters do not significantly differ across models trained for various tasks, except for extreme outliers such as GAN-Discriminators."
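
For what it's worth, a minimal sketch of tapping EfficientNet-B0 as a feature extractor with torchvision; the chosen layer names below are illustrative assumptions, not the repo's actual configuration:

```python
import torch
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights
from torchvision.models.feature_extraction import create_feature_extractor

model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1).eval()

# Tap a few intermediate blocks for multi-scale content/style statistics
# (the selected nodes are an assumption, not the repo's configuration).
extractor = create_feature_extractor(
    model,
    return_nodes={"features.2": "low", "features.4": "mid", "features.6": "high"},
)

with torch.no_grad():
    feats = extractor(torch.rand(1, 3, 224, 224))
for name, f in feats.items():
    print(name, tuple(f.shape))
```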
