Hello author, I am using the flow maps produced by an optical flow estimation model to constrain the flow maps that RIFE generates. Judging from the results, RIFE's flow maps look good overall but are not very accurate in the details. I currently compute an L1 loss on the flow. Is there another loss I could use to improve the accuracy of the generated flow maps?
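For reference, two common alternatives to a plain L1 penalty on the flow are a robust (Charbonnier) penalty and an edge-aware smoothness term. The sketch below is illustrative PyTorch, not code from the RIFE repository; it assumes flow tensors of shape (B, 2, H, W) and an image tensor of shape (B, 3, H, W), and the function names are made up for this example.

```python
import torch

def charbonnier_loss(flow_pred, flow_teacher, eps=1e-3):
    # Robust (Charbonnier) penalty: behaves like L1 for large errors but is
    # smooth near zero, which can stabilize training on fine details.
    diff = flow_pred - flow_teacher
    return torch.mean(torch.sqrt(diff * diff + eps * eps))

def edge_aware_smoothness(flow, image, alpha=10.0):
    # Penalize flow gradients, down-weighted at image edges, so the flow
    # stays smooth inside objects but is allowed to change at boundaries.
    flow_dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs()
    flow_dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs()
    img_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    img_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    w_x = torch.exp(-alpha * img_dx)
    w_y = torch.exp(-alpha * img_dy)
    return (flow_dx * w_x).mean() + (flow_dy * w_y).mean()
```

Whether either term actually helps depends on how the teacher flow is used; as the reply below points out, matching a reference flow more closely is not the same thing as producing a flow that is better for frame synthesis.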
Hi, I have looked into this problem. A difference between the output of a flow-prediction model and the flow RIFE produces does not necessarily tell you whether RIFE's flow or the predicted flow is better. Flow prediction enforces smoothness inside objects and consistency between flow edges and image edges, so it looks correct to a human, but that does not mean those flow values are correct from the perspective of frame synthesis. See TOFlow: Video Enhancement with Task-Oriented Flow.

An even bigger problem is that, in frame interpolation, object motion is inherently uncertain, so the generated flow may end up being an averaged result, which also causes issues: https://openreview.net/forum?id=QuFHei1vuE&noteId=u22Yw26krb
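Following the TOFlow argument above, one way to avoid penalizing RIFE for deviating from a reference flow is to supervise the flow through the frame it synthesizes instead. The sketch below is a minimal, illustrative PyTorch example and not the loss used in RIFE: it backward-warps frame 0 with the predicted flow and compares the result to the ground-truth intermediate frame. Tensor shapes and names (`flow_t0`, `frame0`, `gt_mid`) are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # Warp img with the given flow field using bilinear sampling.
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=img.device, dtype=img.dtype),
        torch.arange(W, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / max(W - 1, 1) - 1.0,
         2.0 * grid_y / max(H - 1, 1) - 1.0), dim=-1)
    return F.grid_sample(img, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def synthesis_loss(flow_t0, frame0, gt_mid):
    # Task-oriented supervision: judge the flow by the frame it synthesizes,
    # not by its distance to a pre-computed reference flow.
    warped = backward_warp(frame0, flow_t0)
    return (warped - gt_mid).abs().mean()
```

Here `flow_t0` is assumed to be the flow from the intermediate frame to frame 0 (the backward-warping convention), and `frame0` and `gt_mid` are (B, 3, H, W) images; a symmetric term with frame 1 can be added in the same way.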