Latte: Latent Diffusion Transformer for Video Generation
Official PyTorch Implementation

This repo contains PyTorch model definitions, pre-trained weights, and training/sampling code for our paper exploring latent diffusion models with transformers (Latte). You can find more visualizations on our project page.

Latte: Latent Diffusion Transformer for Video Generation
Xin Ma, Yaohui Wang*, Xinyuan Chen, Gengyun Jia, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao (*Corresponding Author & Project Lead)
Department of Data Science & AI, Faculty of Information Technology, Monash University
Shanghai Artificial Intelligence Laboratory, Nanjing University of Posts and Telecommunications,
S-Lab, Nanyang Technological University

We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. In order to model the substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where it achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
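The spatial/temporal decomposition described above can be sketched in a few lines. The following is an illustrative NumPy sketch (not the repository's actual implementation, which uses learned projections and multi-head attention): attention is applied first over the patches within each frame, then over the frames at each patch location, so no single attention call ever sees the full T×P token sequence.

```python
import numpy as np

def attention(x):
    """Single-head self-attention over the middle axis of (batch, seq, dim).
    Queries/keys/values are the tokens themselves (learned projections omitted)."""
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)      # (batch, seq, seq)
    scores -= scores.max(axis=-1, keepdims=True)        # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over keys
    return w @ x

# Video latent tokens: T frames, P patches per frame, D channels.
T, P, D = 4, 6, 8
tokens = np.random.randn(T, P, D)

# Spatial block: each frame attends over its own P patches.
x = attention(tokens)                                   # (T, P, D)

# Temporal block: each patch location attends over the T frames.
x = attention(x.transpose(1, 0, 2)).transpose(1, 0, 2)  # back to (T, P, D)

print(x.shape)  # (4, 6, 8)
```

The point of the decomposition is cost: two attentions over sequences of length P and T are much cheaper than one over length T·P.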

The architecture of Latte

This repository contains:

  • 🪐 A simple PyTorch implementation of Latte

  • ⚡️ Pre-trained Latte models trained on FaceForensics, SkyTimelapse, Taichi-HD and UCF101 (256x256). We also provide a T2V checkpoint (512x512). All checkpoints can be found here. An updated LatteT2V model is coming soon, stay tuned!

  • 🛸 A Latte training script using PyTorch DDP.

News

  • (🔥 New) Feb. 24, 2024. 💥 We are grateful that researchers and developers are interested in our work. We will continue to update the LatteT2V model, and we hope our efforts help the community. We have created a Latte Discord channel for discussion; contributions are welcome.

Setup

First, download and set up the repo:

git clone https://github.com/maxin-cn/Latte.git
cd Latte

We provide an environment.yml file that can be used to create a Conda environment. If you only want to run pre-trained models locally on CPU, you can remove the cudatoolkit and pytorch-cuda requirements from the file.
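For reference, the CUDA-related entries mentioned above would look something like this in the file (an illustrative excerpt with hypothetical entries; consult the actual environment.yml for the exact package names and versions):

```yaml
# environment.yml (illustrative excerpt -- the real file may differ)
name: latte
dependencies:
  - python
  - pytorch
  - pytorch-cuda   # remove this line for CPU-only use
  - cudatoolkit    # remove this line for CPU-only use
```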

conda env create -f environment.yml
conda activate latte

Sampling

You can sample from our pre-trained Latte models with sample.py. Weights for our pre-trained Latte model can be found here. The script has various arguments to adjust sampling steps, change the classifier-free guidance scale, etc. For example, to sample from our model on FaceForensics, you can use:

bash sample/ffs.sh

Or, if you want to sample hundreds of videos, you can use the following script with PyTorch DDP:

bash sample/ffs_ddp.sh
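The classifier-free guidance scale mentioned above controls how far the model's conditional prediction is pushed away from its unconditional one at each sampling step. As a generic illustration of the standard combination rule (not code from this repository):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, scale):
    """Standard CFG combination of two noise predictions.
    scale=1 reproduces the conditional prediction; larger scales
    extrapolate further from the unconditional one."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

eps_u = np.zeros(3)  # toy unconditional prediction
eps_c = np.ones(3)   # toy conditional prediction
print(classifier_free_guidance(eps_u, eps_c, 7.5))  # [7.5 7.5 7.5]
```

Higher scales generally strengthen adherence to the condition at the cost of sample diversity, which is why the sampling scripts expose it as a tunable argument.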

If you want to try generating videos from text, please download t2v_required_models and run bash sample/t2v.sh.

Training

We provide a training script for Latte in train.py. This script can be used to train class-conditional and unconditional Latte models. To launch Latte (256x256) training with N GPUs on the FaceForensics dataset:

torchrun --nnodes=1 --nproc_per_node=N train.py --config ./configs/ffs/ffs_train.yaml

Or, if you have a cluster that uses Slurm, you can also train Latte using the following script:

sbatch slurm_scripts/ffs.slurm

We also provide a video-image joint training script, train_with_img.py. Like train.py, it can be used to train class-conditional and unconditional Latte models. For example, to train Latte on the FaceForensics dataset:

torchrun --nnodes=1 --nproc_per_node=N train_with_img.py --config ./configs/ffs/ffs_img_train.yaml
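Video-image joint training generally mixes standalone images into each video batch by treating them as very short clips. How train_with_img.py actually batches data is an implementation detail of this repository; the following is only an illustrative sketch of the general idea, with a frame mask so the loss can ignore padding:

```python
import numpy as np

def joint_batch(videos, images):
    """Pack videos (N, T, C, H, W) and images (M, C, H, W) into one array by
    treating each image as a length-1 clip, zero-padded to T frames.
    The boolean mask marks which frames are real."""
    n, t = videos.shape[:2]
    m = images.shape[0]
    padded = np.zeros((m, t) + images.shape[1:], dtype=videos.dtype)
    padded[:, 0] = images                      # each image occupies frame 0 only
    batch = np.concatenate([videos, padded])   # (N + M, T, C, H, W)
    mask = np.zeros((n + m, t), dtype=bool)
    mask[:n] = True                            # all video frames are real
    mask[n:, 0] = True                         # images contribute one frame each
    return batch, mask

videos = np.random.randn(2, 4, 3, 8, 8)
images = np.random.randn(3, 3, 8, 8)
batch, mask = joint_batch(videos, images)
print(batch.shape, int(mask.sum()))  # (5, 4, 3, 8, 8) 11
```

Joint training of this kind lets the model learn appearance from abundant image data while learning motion from the video clips.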

Contact Us

Yaohui Wang: wangyaohui@pjlab.org.cn
Xin Ma: xin.ma1@monash.edu

Citation

If you find this work useful for your research, please consider citing it.

@article{ma2024latte,
  title={Latte: Latent Diffusion Transformer for Video Generation},
  author={Ma, Xin and Wang, Yaohui and Jia, Gengyun and Chen, Xinyuan and Liu, Ziwei and Li, Yuan-Fang and Chen, Cunjian and Qiao, Yu},
  journal={arXiv preprint arXiv:2401.03048},
  year={2024}
}

Acknowledgments

Latte has been greatly inspired by the following amazing works and teams: DiT and PixArt-α. We thank all the contributors for open-sourcing their work.

License

The code and model weights are licensed under LICENSE.
