So, for each image-text or video-text pair, Text2LIVE learns a GAN and can then generate the pretty output for that specific image.
My question is about generalization. If I learn a GAN model for the Giraffe + Stained Glass pair, would it generalize to other giraffe videos, or is it applicable only to the video on which the GAN was trained? And how practical is it to train a separate GAN for each image/attribute pair, e.g.,
Bear+Fire
Man+Smoke
Car+Fire
Coffee Cup + Latte art heart
etc.