
Question about GenPercept performance #5

Closed
gwang-kim opened this issue Jun 4, 2024 · 1 comment

@gwang-kim

Hi there,

Thank you for your great work on GenPercept! The idea of one-step estimation is very promising.

I noticed that the performance of GenPercept seems to be lower than Marigold and GeoWizard. I'm wondering if this difference is due to:

  • The number of estimation steps (multi-step vs. one-step)?
  • The amount of training data used?
  • A combination of both factors?

Any insights you could provide would be greatly appreciated!

@guangkaixu (Collaborator) commented Jun 23, 2024

@gwang-kim Hi, sorry for the late response. It seems GitHub's email notification for this issue didn't reach me.

Although we tried to reproduce Marigold ourselves (its training code was not released), we could hardly reach the performance reported in their paper. For a fair comparison, we trained Marigold and GenPercept under the same training setting; the results are shown in Table 7 of the main paper.

"Stochastic MS Gen." there refers to Marigold as trained by ourselves, and its performance is slightly worse than the GenPercept training paradigm on the depth estimation task.
