Hi, I've finally got some time to play with this on a GPU today.
I have some issues with the stochastic optimization, which you are already aware of and mention in the demo:
```python
print("NOTE: if you get a bad results for ssi, blame stochastic optimisation and retry...")
print("      The training is done on the same exact image that we infer on, very few pixels...")
print("      Training should be more stable given more data...")
```
I tried demo2d once and got weird artefacts (that almost looked like QR codes).
I tried again, with different artefacts (stripes).
And again, different artefacts (ghost images).
On the fourth attempt I finally got a result where I thought "hey, this can actually produce useful output similar to what I see mentioned in the paper".
So if you have any further suggestions to increase reproducibility and help the optimization converge to the same minimum, that would be great.
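For what it's worth, one thing I plan to try is pinning down all the random state before each run. A minimal sketch, assuming the demo runs on PyTorch (the helper name `seed_everything` is mine, not from the repo):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 0) -> None:
    """Seed every RNG involved so repeated runs start from the same state."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # safe no-op on CPU-only installs
    # Prefer deterministic cuDNN kernels (may cost some speed).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Calling this at the top of the demo should at least make the runs repeatable, even if it doesn't by itself steer the optimization toward a better minimum.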
I should heed the advice and try and train on more pixels, but I need to dive a bit deeper into the code for that.
Maybe that will be enough to solve the issue.
But maybe there are other strategies that could help avoid these artefacts? Perhaps a regularizer, some change to the cost function, or something that penalises the appearance of spikes in the FFT (i.e. the unwanted periodic patterns I have seen in many runs)?
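To make the FFT idea concrete, here is a rough sketch of the kind of penalty I have in mind, assuming a PyTorch loss. Periodic artefacts (stripes, QR-like patterns) show up as isolated peaks in the power spectrum, so one could penalise magnitudes far above a robust baseline. This is purely my own hypothetical helper, not anything from the library:

```python
import torch


def fft_spike_penalty(img: torch.Tensor, weight: float = 1e-3) -> torch.Tensor:
    """Penalise isolated spikes in the power spectrum of a 2-D image.

    Stripes and other periodic artefacts concentrate energy in a few
    Fourier bins; bins far above the median magnitude are treated as
    spikes and penalised. Hypothetical sketch, not part of the library.
    """
    spectrum = torch.fft.fft2(img).abs()
    # Robust baseline: median magnitude over all frequency bins.
    baseline = spectrum.median()
    # Only magnitudes well above the baseline count as spikes.
    excess = torch.relu(spectrum - 10.0 * baseline)
    return weight * excess.mean()


# e.g. loss = reconstruction_loss + fft_spike_penalty(output)
```

The threshold factor and weight are guesses; whether this actually helps the optimization rather than just masking the symptom is exactly what I'd like your opinion on.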
Here are the results of several different runs all on the same demo image with exactly the same parameters:
Maybe there is something I am doing wrong.
I checked whether there is any variation in the artificially generated noisy input image, but it was always the same starting point.
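To rule out the input as the source of variation, I compared the generated noisy image across runs. A simple way to do this check, sketched with a helper of my own (`array_digest` is not from the repo):

```python
import hashlib

import numpy as np


def array_digest(arr: np.ndarray) -> str:
    """Hash the raw bytes of an array so two runs can be compared exactly."""
    return hashlib.sha256(np.ascontiguousarray(arr).tobytes()).hexdigest()
```

Printing the digest of the noisy input at the start of each run confirmed the digests match, so the variability must come from the training itself, not from the data generation.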