
How to get color results? #3

Open
Xingyb14 opened this issue Aug 29, 2019 · 8 comments

Xingyb14 commented Aug 29, 2019
Hi~
Thanks for your work.
I ran SelfDeblur on my own blurred images and got grayscale results. It seems that SelfDeblur is performed on the Y channel only, so could you provide the code for converting the results to RGB?
Looking forward to it, thank you!

csdwren (Owner) commented Aug 29, 2019

Thanks. The deblurred Y channel is used to replace the Y component of the blurry image in YCbCr space, and the result is then converted back to RGB. You can try it.

I will update it along with a new way to directly deblur RGB images.
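The Y-channel replacement described above can be sketched in a few lines. This is a minimal NumPy version, assuming float images in [0, 1] and the ITU-R BT.601 (JPEG) conversion coefficients; the function name and array layout are illustrative, not taken from the repository:

```python
import numpy as np

def replace_y_channel(blurry_rgb, deblurred_y):
    """Put the deblurred Y channel into the blurry image's YCbCr
    representation, keeping the original Cb/Cr, then return RGB."""
    r, g, b = blurry_rgb[..., 0], blurry_rgb[..., 1], blurry_rgb[..., 2]
    # RGB -> Cb/Cr of the blurry input (BT.601, values in [0, 1])
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    # YCbCr -> RGB, using the deblurred Y with the blurry Cb/Cr
    y = deblurred_y
    out = np.stack([y + 1.402 * (cr - 0.5),
                    y - 0.344136 * (cb - 0.5) - 0.714136 * (cr - 0.5),
                    y + 1.772 * (cb - 0.5)], axis=-1)
    return np.clip(out, 0.0, 1.0)
```

As a sanity check, feeding back the image's own Y component (0.299 R + 0.587 G + 0.114 B) reproduces the input image up to floating-point error.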

@JingyunLiang

I didn't find where you transform the image from RGB space to YCbCr space. In the loss computation, out_y ([1,1,128,128]) and y ([1,3,128,128]) are passed directly to the MSE loss:

total_loss = mse(out_y, y) + tv_loss(out_x)

From the source code of mse_loss, out_y will be broadcast to [1,3,128,128]:

expanded_input, expanded_target = torch.broadcast_tensors(input, target)
ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))

Therefore, my question is: since the network is not trained with supervision from the Y component of the blurry image, how can we expect out_y to be the Y component of the deblurred image? @csdwren
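This concern can be illustrated numerically: with broadcasting, minimizing mse(out_y, y) drives the single-channel output toward the unweighted per-pixel mean of the three RGB channels, which is a plain grayscale average rather than the BT.601-weighted Y component. A small NumPy gradient-descent sketch (names and shapes are illustrative, not code from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random((1, 3, 4, 4))        # stand-in for the blurry RGB target
out_y = np.zeros((1, 1, 4, 4))      # single-channel prediction

# Gradient of mean((out_y - y)**2) w.r.t. out_y after broadcasting:
# the residual summed over the channel axis, scaled by 2 / y.size.
for _ in range(200):
    grad = 2.0 * (out_y - y).sum(axis=1, keepdims=True) / y.size
    out_y -= 0.5 * grad             # plain gradient descent, lr = 0.5

mean_rgb = y.mean(axis=1, keepdims=True)
bt601_y = (0.299 * y[:, 0] + 0.587 * y[:, 1] + 0.114 * y[:, 2])[:, None]
print(np.allclose(out_y, mean_rgb, atol=1e-3))  # converges to the channel mean
```

In other words, out_y is effectively supervised by a grayscale average of the blurry target, which is consistent with the gray results reported at the top of this thread.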

Uhall commented Nov 19, 2019

I have the same question.

csdwren (Owner) commented Feb 22, 2020

https://github.com/csdwren/SelfDeblur/blob/master/selfdeblur_ycbcr.py has been updated to handle color images. The code has also been improved for better robustness. Thanks.

ZijianDu pushed a commit to ZijianDu/SelfDeblur that referenced this issue Sep 24, 2021
TenMiss commented Nov 18, 2021

In your paper, you used the non-blind deconvolution method [14], i.e. "D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In NIPS, 2009." I found that the implementation of this method available on the web takes a grayscale image as input and produces a grayscale image as output, but my final deblurring result should be a restored color image. This problem has been bothering me for a long time. I am wondering if you could kindly send me the source program and necessary information for this part; I promise they will be used only for research purposes.
Thank you very much for your kind consideration, and I look forward to your early reply.

TenMiss commented Nov 18, 2021

Thank you for your work. I mainly want to know how to turn the deconvolution result from "D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In NIPS, 2009." into a clear color image. Thank you very much!

csdwren (Owner) commented Nov 18, 2021

You can refer to the code 'selfdeblur_ycbcr.py'. An RGB image is converted to YCbCr, the deconvolution is applied only to the Y channel, and the result is then converted back from YCbCr to RGB.

TenMiss commented Nov 18, 2021 via email
