💡 We also have other video generation projects that may interest you ✨.
Open-Sora Plan: Open-Source Large Video Generation Model
Bin Lin, Yunyang Ge, Xinhua Cheng, et al.
MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators
Shenghai Yuan, Jinfa Huang, Yujun Shi, et al.
ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation
Shenghai Yuan, Jinfa Huang, Yongqi Xu, et al.
- ⏳⏳⏳ Release the full codes & datasets & weights.
- ⏳⏳⏳ Integrate into Diffusers.
- [2024.12.09] 🔥 We release the test set and metric calculation code used in the paper; now you can measure the metrics on your own machine. Please refer to this guide for more details.
- [2024.12.08] 🔥 The code for data preprocessing is out, which is used to obtain the training data required by ConsisID. Please refer to this guide for more details.
- [2024.12.04] Thanks @shizi for providing 🤗Windows-ConsisID and 🟣Windows-ConsisID, which make it easy to run ConsisID on Windows.
- [2024.12.01] 🔥 We provide the full text prompts corresponding to all the videos on the project page. Click here to get them and try the demo.
- [2024.11.30] 🔥 We have fixed the Hugging Face demo; welcome to try it.
- [2024.11.29] 🔥 The current code and weights are our early versions; the differences from the latest version on arXiv can be viewed here. We will release the full code in the next few days.
- [2024.11.28] Thanks @camenduru for providing the Jupyter Notebook and @Kijai for providing the ComfyUI extension ComfyUI-ConsisIDWrapper. If you find related work, please let us know.
- [2024.11.27] 🔥 Due to policy restrictions, we only open-source part of the dataset. You can download it by clicking here. We will release the data processing code in the next few days.
- [2024.11.26] 🔥 We release the arXiv paper for ConsisID; you can click here for more details.
- [2024.11.22] 🔥 All codes & datasets are coming soon! Stay tuned 👀!
Identity-Preserving Text-to-Video Generation.
or you can click here to watch the video.
We highly recommend trying out our web demo with the following command, which incorporates all features currently supported by ConsisID. We also provide an online demo on Hugging Face Spaces.
python app.py
python infer.py --model_path BestWishYsh/ConsisID-preview
Warning: even with the same seed and prompt, results will differ across machines.
ConsisID has high requirements for prompt quality. You can use GPT-4o to refine the input text prompt; an example follows (original prompt: "a man is playing guitar.").
a man is playing guitar.
Change the sentence above to something like this (add some facial changes, even if they are minor. Don't make the sentence too long):
The video features a man standing next to an airplane, engaged in a conversation on his cell phone. He is wearing sunglasses and a black top, and he appears to be talking seriously. The airplane has a green stripe running along its side, and there is a large engine visible behind him. The man seems to be standing near the entrance of the airplane, possibly preparing to board or just having disembarked. The setting suggests that he might be at an airport or a private airfield. The overall atmosphere of the video is professional and focused, with the man's attire and the presence of the airplane indicating a business or travel context.
Some sample prompts are available here.
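If you want to script this refinement step, the sketch below shows one possible way to do it with the OpenAI Python SDK; the system instruction, model name, and helper function are illustrative assumptions, not part of ConsisID.

```python
# Hedged sketch: refine a short prompt with GPT-4o via the OpenAI Python SDK.
# The system instruction below is illustrative, not an official ConsisID prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def refine_prompt(short_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's prompt as a detailed video description. "
                    "Add some facial details, even minor ones, and keep it reasonably short."
                ),
            },
            {"role": "user", "content": short_prompt},
        ],
    )
    return response.choices[0].message.content


print(refine_prompt("a man is playing guitar."))
```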
ConsisID requires about 44 GB of GPU memory to decode 49 frames (6 seconds of video at 8 FPS) with an output resolution of 720x480 (W x H), which makes it impossible to run on consumer GPUs or the free-tier T4 Colab. The following memory optimizations can be used to reduce the memory footprint. For replication, you can refer to this script.
| Feature (applied on top of the previous) | Max Memory Allocated | Max Memory Reserved |
|---|---|---|
| - | 37 GB | 44 GB |
| enable_model_cpu_offload | 22 GB | 25 GB |
| enable_sequential_cpu_offload | 16 GB | 22 GB |
| vae.enable_slicing | 16 GB | 22 GB |
| vae.enable_tiling | 5 GB | 7 GB |
# turn on if you don't have multiple GPUs or enough GPU memory (such as an H100)
pipe.enable_model_cpu_offload()
pipe.enable_sequential_cpu_offload()
# slice/tile the VAE decode to further reduce peak memory
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
Warning: enabling these optimizations increases inference time and may also reduce quality.
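For reference, the peak-memory numbers in the table above can be measured with PyTorch's built-in memory statistics. A minimal sketch, assuming `pipe` has already been loaded as in the replication script and a generation has just finished:

```python
# Hedged sketch: report peak GPU memory around a generation run.
import torch

torch.cuda.reset_peak_memory_stats()

# ... run the pipeline here, e.g. video = pipe(prompt=..., image=...) ...

max_allocated = torch.cuda.max_memory_allocated() / 1024**3
max_reserved = torch.cuda.max_memory_reserved() / 1024**3
print(f"Max memory allocated: {max_allocated:.2f} GB")
print(f"Max memory reserved:  {max_reserved:.2f} GB")
```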
We recommend setting up the environment as follows.
git clone --depth=1 https://github.com/PKU-YuanGroup/ConsisID.git
cd ConsisID
conda create -n consisid python=3.11.0
conda activate consisid
pip install -r requirements.txt
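Once the environment is installed, a quick sanity check (nothing ConsisID-specific) can confirm that PyTorch sees your GPU before you download the weights:

```python
# Hedged sketch: verify the environment before downloading weights.
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```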
The weights are available at 🤗HuggingFace and 🟣WiseModel, and will be downloaded automatically when running `app.py` and `infer.py`, or you can download them with the following commands.
# way 1
# if you are in china mainland, run this first: export HF_ENDPOINT=https://hf-mirror.com
cd util
python download_weights.py
# way 2
# if you are in china mainland, run this first: export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --repo-type model \
BestWishYsh/ConsisID-preview \
--local-dir ckpts
# way 3
git lfs install
git clone https://www.wisemodel.cn/SHYuanBest/ConsisID-Preview.git
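If you prefer to stay in Python rather than use the CLI, `huggingface_hub.snapshot_download` does the same as way 2; the `ckpts` target directory is chosen to match the layout shown below.

```python
# Hedged sketch: download the preview weights into ./ckpts with huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="BestWishYsh/ConsisID-preview",
    repo_type="model",
    local_dir="ckpts",
)
```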
Once ready, the weights will be organized in this format:
📦 ckpts/
├── 📂 data_process/
├── 📂 face_encoder/
├── 📂 scheduler/
├── 📂 text_encoder/
├── 📂 tokenizer/
├── 📂 transformer/
├── 📂 vae/
├── 📄 configuration.json
├── 📄 model_index.json
Please refer to this guide for how to obtain the training data required by ConsisID. If you want to train a text-to-image-and-video generation model, you need to arrange the dataset in the following format:
📦 datasets/
├── 📂 captions/
│ ├── 📄 dataname_1.json
│ ├── 📄 dataname_2.json
├── 📂 dataname_1/
│ ├── 📂 refine_bbox_jsons/
│ ├── 📂 track_masks_data/
│ ├── 📂 videos/
├── 📂 dataname_2/
│ ├── 📂 refine_bbox_jsons/
│ ├── 📂 track_masks_data/
│ ├── 📂 videos/
├── ...
├── 📄 total_train_data.txt
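Before launching training, it can help to verify that every dataset folder follows this layout. A minimal sketch based only on the tree above; the directory and file names come from the tree, everything else is an assumption.

```python
# Hedged sketch: check that each dataset folder has the expected subdirectories
# and a matching caption json, following the layout in the tree above.
from pathlib import Path

root = Path("datasets")
expected = ["refine_bbox_jsons", "track_masks_data", "videos"]

for data_dir in sorted(root.iterdir()):
    if not data_dir.is_dir() or data_dir.name == "captions":
        continue
    missing = [name for name in expected if not (data_dir / name).is_dir()]
    caption = root / "captions" / f"{data_dir.name}.json"
    if missing:
        print(f"{data_dir.name}: missing {missing}")
    if not caption.exists():
        print(f"{data_dir.name}: missing caption file {caption}")
```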
First, set the hyperparameters:
- environment (e.g., cuda): deepspeed_configs
- training arguments (e.g., batch size): train_single_rank.sh or train_multi_rank.sh
Then, run the following bash script to start training:
# For single rank
bash train_single_rank.sh
# For multi rank
bash train_multi_rank.sh
We found some plugins created by community developers. Thanks for their efforts:
- ComfyUI Extension. ComfyUI-ConsisIDWrapper (by @Kijai).
- Jupyter Notebook. Jupyter-ConsisID (by @camenduru).
- Windows Docker. 🤗Windows-ConsisID and 🟣Windows-ConsisID (by @shizi).
- Diffusers. We need your help to integrate ConsisID into Diffusers. 🙏 [Need your contribution]
If you find related work, please let us know.
We release a subset of the data used to train ConsisID. The dataset is available at HuggingFace, or you can download it with the following command. Some samples can be found on our Project Page.
huggingface-cli download --repo-type dataset \
BestWishYsh/ConsisID-preview-Data \
--local-dir BestWishYsh/ConsisID-preview-Data
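The same download can also be done from Python; a minimal sketch mirroring the CLI call above:

```python
# Hedged sketch: download the released training subset via huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="BestWishYsh/ConsisID-preview-Data",
    repo_type="dataset",
    local_dir="BestWishYsh/ConsisID-preview-Data",
)
```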
We release the data used for evaluation in ConsisID, which is available at HuggingFace. Please refer to this guide for how to evaluate a customized model.
- This project wouldn't be possible without the following open-sourced repositories: Open-Sora Plan, CogVideoX, EasyAnimate, CogVideoX-Fun.
- The majority of this project is released under the Apache 2.0 license as found in the LICENSE file.
- The CogVideoX-5B model (Transformer module) is released under the CogVideoX LICENSE.
- The service is a research preview. Please contact us if you find any potential violations. (shyuan-cs@hotmail.com)
If you find our paper and codes useful in your research, please consider giving a star ⭐ and citation 📝.
@article{yuan2024identity,
title={Identity-Preserving Text-to-Video Generation by Frequency Decomposition},
author={Yuan, Shenghai and Huang, Jinfa and He, Xianyi and Ge, Yunyuan and Shi, Yujun and Chen, Liuhan and Luo, Jiebo and Yuan, Li},
journal={arXiv preprint arXiv:2411.17440},
year={2024}
}