Text-to-video diffusion models enable the generation of high-quality videos from text prompts, making it easy to create diverse and individualized content. However, existing approaches mostly focus on short video generation (typically 16 or 24 frames), requiring hard cuts when naively extended to long video synthesis. StreamingT2V enables autoregressive generation of long videos of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are:
A ControlNet-like module which conditions the current generation on frames extracted from the previous chunk, using a cross-attention mechanism to integrate its features into the UNet's skip residual features.
An IP-Adapter-like module which extracts high-level scene and object features from a fixed anchor frame in the first video chunk and mixes them into the prompt embedding features before the spatial cross-attention is executed.
A SDEdit-based video refinement stage with randomized chunk sampling of overlapped frames per denoising timestep.
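To make the first component more concrete, here is a minimal PyTorch sketch (not the official StreamingT2V code; module and argument names are illustrative) of how features from the previous chunk's conditioning frames could be injected into a UNet skip-residual via cross-attention:

```python
import torch
import torch.nn as nn


class ChunkConditioner(nn.Module):
    """Illustrative ControlNet-like conditioning module: cross-attends
    UNet skip-residual features (queries) against features extracted
    from the previous chunk's frames (keys/values), then adds the
    result back as a residual."""

    def __init__(self, dim: int, cond_dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(
            embed_dim=dim,
            num_heads=num_heads,
            kdim=cond_dim,
            vdim=cond_dim,
            batch_first=True,
        )

    def forward(
        self, skip_feats: torch.Tensor, cond_feats: torch.Tensor
    ) -> torch.Tensor:
        # skip_feats: (batch, tokens, dim) flattened UNet skip features
        # cond_feats: (batch, cond_tokens, cond_dim) features of frames
        #             extracted from the previous chunk
        attn_out, _ = self.attn(self.norm(skip_feats), cond_feats, cond_feats)
        return skip_feats + attn_out  # residual injection


# Toy usage with random tensors standing in for real features
module = ChunkConditioner(dim=64, cond_dim=32)
skip = torch.randn(2, 16, 64)
cond = torch.randn(2, 8, 32)
out = module(skip, cond)
print(out.shape)  # torch.Size([2, 16, 64])
```

The IP-Adapter-like module would be analogous but would attend over anchor-frame features concatenated/mixed with the text embeddings instead of skip residuals.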
Open source status
The model implementation is available.
The model weights are available (Only relevant if addition is not a scheduler).
Hi @dg845! We did indeed plan to support StreamingT2V, but other things took priority. Would love to have this if you find time to PR - thanks! I'm familiar with the codebase, so would love to be of help in any way.