Shengqiong Wu, Hao Fei*, Leigang Qu, Wei Ji, and Tat-Seng Chua. (*Correspondence )
NExT++, School of Computing, National University of Singapore
This repository hosts the code, data, and model weights of NExT-GPT, the first end-to-end MM-LLM that perceives input and generates output in arbitrary combinations (any-to-any) of text, image, video, and audio, and beyond.
- [2023.09.15] 🚀🚀 Release the code of NExT-GPT in version `7b_tiva_v0`.
- Release checkpoints (projection layers).
- Release MosIT data.
- Extending NExT-GPT to more types & sizes of LLMs.
- Empowering NExT-GPT with more modalities of inputs & outputs.
- ...
Here, we showcase one example generated from NExT-GPT. For more examples, kindly visit the webpage, or the online live demo.
example_5_Trim.mp4
example_6_Trim.mp4
example_9_Trim.mp4
NExT-GPT is built on top of an existing pre-trained LLM, multimodal encoders, and SoTA diffusion models, with end-to-end instruction tuning.
- Multimodal Encoding Stage. Leveraging established encoders to encode inputs in various modalities; these representations are then projected through a projection layer into language-like representations comprehensible to the LLM.
- LLM Understanding and Reasoning Stage. Harnessing an existing open-sourced LLM as the core to process the input information for semantic understanding and reasoning. The LLM not only directly generates text tokens but also produces unique “modality signal” tokens that instruct the decoding layers on whether to output multimodal content, and which modality to produce.
- Multimodal Generation Stage. Receiving the multimodal signals with specific instructions from the LLM (if any), the Transformer-based output projection layers map the signal-token representations into representations understandable to the downstream multimodal decoders.
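To make the three stages concrete, here is a toy, self-contained sketch of the flow. It is purely illustrative: all function names, dimensions, and tokens below are hypothetical stand-ins, not the repo's actual API.

```python
# Toy sketch of the three-stage any-to-any flow described above.
# Every name and dimension here is illustrative, not the repo's API.

def encode(modality_input, proj):
    """Stage 1: encode an input and project it into the LLM's token space."""
    return [sum(x * w for x, w in zip(modality_input, row)) for row in proj]

def llm_step(tokens):
    """Stage 2: the LLM emits text plus optional modality-signal tokens."""
    wants_image = sum(tokens) > 0  # stand-in for the LLM's actual decision
    return ("a red panda", ["<IMG_SIGNAL>"] if wants_image else [])

def decode(signal_tokens):
    """Stage 3: output projections route signal tokens to a diffusion decoder."""
    return [f"diffusion({s})" for s in signal_tokens]

proj = [[0.5, 0.5], [1.0, -1.0], [0.2, 0.8]]  # 2-dim feature -> 3 LLM tokens
tokens = encode([0.4, 0.6], proj)
text, signals = llm_step(tokens)
outputs = decode(signals)
```

In the real system, `encode` corresponds to ImageBind plus the input projection, `llm_step` to Vicuna emitting signal tokens, and `decode` to the output projections feeding the diffusion models.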
For more technical details, kindly refer to the paper.
- 1. Code Structure
- 2. Environment Preparation
- 3. Training/Adapting NExt-GPT on Your Own
- 4. Running NExT-GPT System
├── figures
├── data
│ ├── T-X_pair_data
│ │ ├── audiocap # text-audio pairs data
│ │ │ ├── audios # audio files
│ │ │ └── audiocap.json # the audio captions
│ │ ├── cc3m # text-image pairs data
│ │ │ ├── images # image files
│ │ │ └── cc3m.json # the image captions
│ │ └── webvid # text-video pairs data
│ │   ├── videos # video files
│ │   └── webvid.json # the video captions
│ ├── IT_data # instruction data
│ │ ├── T+X-T_data # text+[image/audio/video] to text instruction data
│ │ │ ├── alpaca # textual instruction data
│ │ │ ├── llava # visual instruction data
│ │ ├── T-T+X # synthesized text to text+[image/audio/video] instruction data
│ │ └── MosIT # Modality-switching Instruction Tuning instruction data
├── code
│ ├── config
│ │ ├── base.yaml # the model configuration
│ │ ├── stage_1.yaml # enc-side alignment training configuration
│ │ ├── stage_2.yaml # dec-side alignment training configuration
│ │ └── stage_3.yaml # instruction-tuning configuration
│ ├── dsconfig
│ │ ├── stage_1.json # deepspeed configuration for enc-side alignment training
│ │ ├── stage_2.json # deepspeed configuration for dec-side alignment training
│ │ └── stage_3.json # deepspeed configuration for instruction-tuning training
│ ├── dataset
│ │ ├── base_dataset.py
│ │ ├── cc3m_dataset.py # process and load text-image pair dataset
│ │ ├── audiocap_dataset.py # process and load text-audio pair dataset
│ │ ├── webvid_dataset.py # process and load text-video pair dataset
│ │ └── instruction_dataset.py # process and load instruction pair dataset
│ ├── model
│ │ ├── ImageBind # the code from ImageBind Model
│ │ ├── common
│ │ ├── anyToImageVideoAudio.py # the main model file
│ │ ├── agent.py
│ │ ├── modeling_llama.py
│ │ ├── custom_ad.py # the audio diffusion
│ │ ├── custom_sd.py # the image diffusion
│ │ ├── custom_vd.py # the video diffusion
│ │ ├── layers.py # the output projection layers
│ │ └── ...
│ ├── scripts
│ │ ├── train.sh # training NExT-GPT script
│ │ └── app.sh # deploying demo script
│ ├── header.py
│ ├── process_embeddings.py # precompute the captions embeddings
│ ├── train.py # training
│ ├── inference.py # inference
│ ├── demo_app.py # deploy Gradio demonstration
│ └── ...
├── ckpt
│ ├── delta_ckpt # tunable NExT-GPT params
│ │ ├── nextgpt
│ │ │ ├── 7b_tiva_v0 # the directory to save the checkpoints and logs
│ │ │ │ ├── log # the logs
│ │ └── ...
│ ├── pretrained_ckpt # frozen params of pretrained modules
│ │ ├── imagebind_ckpt
│ │ │ ├── huge # version
│ │ │ │ └── imagebind_huge.pth
│ │ ├── vicuna_ckpt
│ │ │ ├── 7b_v0 # version
│ │ │ │ ├── config.json
│ │ │ │ ├── pytorch_model-00001-of-00002.bin
│ │ │ │ ├── tokenizer.model
│ │ │ │ └── ...
├── LICENCE.md
├── README.md
└── requirements.txt
2. Environment Preparation [Back to Top]
Please first clone the repo and install the required environment, which can be done by running the following commands:
conda create -n nextgpt python=3.8
conda activate nextgpt
# CUDA 11.6
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
git clone https://github.com/NExT-GPT/NExT-GPT.git
cd NExT-GPT
pip install -r requirements.txt
3.1. Preparing Pre-trained Checkpoint [Back to Top]
NExT-GPT is built on the following excellent existing models. Please follow the instructions to prepare the checkpoints.
- `ImageBind` is the unified image/video/audio encoder. The pre-trained checkpoint can be downloaded from here with version `huge`. Afterward, put the `imagebind_huge.pth` file at [./ckpt/pretrained_ckpt/imagebind_ckpt/huge].
- `Vicuna`: first prepare the LLaMA weights by following the instructions [here]. Then put the pre-trained model at [./ckpt/pretrained_ckpt/vicuna_ckpt/].
- `Image Diffusion` is used to generate images. NExT-GPT uses Stable Diffusion with version `v1-5`. (It will be downloaded automatically.)
- `Audio Diffusion` is used to produce audio content. NExT-GPT employs AudioLDM with version `l-full`. (It will be downloaded automatically.)
- `Video Diffusion` is used for video generation. We employ ZeroScope with version `v2_576w`. (It will be downloaded automatically.)
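Before training, it can help to confirm the manually prepared checkpoints are in place. The helper below is hypothetical (not part of the repo); it simply checks the paths implied by the directory layout above.

```python
# Hypothetical helper (not part of the repo): verify that the manually
# prepared checkpoints sit where the directory layout above expects them.
import os

REQUIRED = [
    "ckpt/pretrained_ckpt/imagebind_ckpt/huge/imagebind_huge.pth",
    "ckpt/pretrained_ckpt/vicuna_ckpt/7b_v0/config.json",
    "ckpt/pretrained_ckpt/vicuna_ckpt/7b_v0/tokenizer.model",
]

def missing_checkpoints(root="."):
    """Return the required checkpoint paths that are missing under `root`."""
    return [p for p in REQUIRED if not os.path.exists(os.path.join(root, p))]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("Missing checkpoints:\n  " + "\n  ".join(missing))
    else:
        print("All pretrained checkpoints found.")
```

The diffusion checkpoints are not listed because, as noted above, they are downloaded automatically.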
3.2. Preparing Dataset [Back to Top]
Please download the following datasets used for model training:
A) T-X pairs data
- `CC3M` of text-image pairs: please follow the instruction [here]. Then put the data at [./data/T-X_pair_data/cc3m].
- `WebVid` of text-video pairs: see the [instruction]. The file should be saved at [./data/T-X_pair_data/webvid].
- `AudioCap` of text-audio pairs: see the [instruction]. Save the data in [./data/T-X_pair_data/audiocap].
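A quick sanity check on a downloaded caption file can catch truncated downloads early. The exact JSON schema is defined by the repo's dataset loaders; the keys used below (`image_name`, `caption`) are illustrative assumptions only.

```python
# Hypothetical sanity check for a caption file such as cc3m.json.
# The keys "image_name" and "caption" are assumptions, not the repo's
# confirmed schema; adjust them to match the actual file.
import json

def check_caption_file(path, name_key="image_name", caption_key="caption"):
    """Return (well-formed record count, total record count)."""
    with open(path) as f:
        records = json.load(f)
    ok = sum(
        1
        for rec in records
        if isinstance(rec.get(name_key), str) and isinstance(rec.get(caption_key), str)
    )
    return ok, len(records)
```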
B) Instruction data
- `T+X-T` data
  - `LLaVA`, the visual instruction data: download it from here, and then put it at [./data/IT_data/T+X-T_data/llava].
  - `Alpaca`, the textual instruction data: download it from here, and then put it at [./data/IT_data/T+X-T_data/alpaca/].
  - `VideoChat`: download the video instruction data here, and then put it at [./data/IT_data/T+X-T_data/videochat/].
- `T-T+X` data
  - Run the following commands to construct the data. Please ensure the above `T+X-T` datasets are prepared. Afterward, the `T-T+X` file `instruction_data.json` will be saved at [./data/IT_data/T-T+X_data].
    cd ./code/dataset/
    python instruction_dataset.py
- `MosIT` data
  - Download the file from here and put it in [./data/IT_data/MosIT_data/]. (We are in the process of finalizing the data and handling the copyright issue, and will release it later.)
3.3. Precomputing Embeddings [Back to Top]
In decoding-side alignment training, we minimize the distance between the representations of signal tokens and captions. To save time and memory, we precompute the text embeddings for image, audio, and video captions using the text encoder within the respective diffusion models.
Please run this command before the following training of NExT-GPT; the produced embedding file will be saved at [./data/embed].
cd ./code/
python process_embeddings.py ../data/T-X_pair_data/cc3m/cc3m.json image ../data/embed/ runwayml/stable-diffusion-v1-5
Note of arguments:
- args[1]: path of the caption file;
- args[2]: modality, which can be `image`, `video`, or `audio`;
- args[3]: saving path of the embedding file;
- args[4]: name of the corresponding pre-trained diffusion model.
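Conceptually, the precomputation maps each caption to a fixed-size text embedding and stores name-to-embedding pairs. The sketch below illustrates that idea with a deterministic stub encoder standing in for the diffusion model's text encoder; it is not the repo's `process_embeddings.py`.

```python
# Minimal sketch of the precomputation step, assuming the script maps each
# caption to a fixed-size text embedding and stores name -> embedding pairs.
# The stub encoder is a stand-in for the diffusion model's text encoder.

def stub_text_encoder(caption, dim=4):
    """Deterministic stand-in for a CLIP/CLAP-style text encoder."""
    h = sum(ord(c) for c in caption)
    return [(h % (i + 7)) / 10.0 for i in range(dim)]

def precompute(caption_records, name_key="image_name", caption_key="caption"):
    """Return {file name: embedding} for every record in the caption file."""
    return {r[name_key]: stub_text_encoder(r[caption_key]) for r in caption_records}

records = [
    {"image_name": "0001.jpg", "caption": "a dog on the beach"},
    {"image_name": "0002.jpg", "caption": "sunset over the city"},
]
embeds = precompute(records)
```

The real script replaces `stub_text_encoder` with the text encoder of the diffusion model named in args[4], so the stored embeddings live in the same space that the decoding-side alignment loss targets.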
3.4. Training NExT-GPT [Back to Top]
First of all, please refer to the base configuration file [./code/config/base.yaml] for the basic system setting of overall modules.
Then, the training of NExT-GPT starts with this script:
cd ./code
bash scripts/train.sh
The script runs the following command:
deepspeed --include localhost:0 --master_addr 127.0.0.1 --master_port 28459 train.py \
    --model nextgpt \
    --stage 1 \
    --dataset cc3m \
    --data_path ../data/T-X_pair_data/cc3m/cc3m.json \
    --mm_root_path ../data/T-X_pair_data/cc3m/images/ \
    --embed_path ../data/embed/ \
    --save_path ../ckpt/delta_ckpt/nextgpt/7b/ \
    --log_path ../ckpt/delta_ckpt/nextgpt/7b/log/
where the key arguments are:
- `--include`: `localhost:0` indicates using GPU `0` of the local host for deepspeed.
- `--stage`: the training stage.
- `--dataset`: the dataset name used for training the model.
- `--data_path`: the path of the training file.
- `--mm_root_path`: the root path of the image/video/audio files.
- `--embed_path`: the path of the text embedding file.
- `--save_path`: the directory that saves the trained delta weights. This directory will be created automatically.
- `--log_path`: the directory that saves the log file.
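For orientation, the flags above could be declared roughly as follows. This is an illustrative sketch, not the repo's actual `train.py` parser; defaults and `choices` are assumptions.

```python
# Illustrative sketch (not the repo's actual code) of how train.py might
# declare the command-line flags listed above.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="NExT-GPT training (sketch)")
    p.add_argument("--model", default="nextgpt")
    p.add_argument("--stage", type=int, choices=[1, 2, 3], required=True)
    p.add_argument("--dataset", required=True)    # e.g. cc3m / webvid / audiocap / instruction
    p.add_argument("--data_path", required=True)  # caption or instruction json
    p.add_argument("--mm_root_path")              # images/videos/audios root
    p.add_argument("--embed_path")                # precomputed text embeddings
    p.add_argument("--save_path", required=True)  # delta weights output dir
    p.add_argument("--log_path")                  # log output dir
    return p

args = build_parser().parse_args([
    "--stage", "1", "--dataset", "cc3m",
    "--data_path", "../data/T-X_pair_data/cc3m/cc3m.json",
    "--save_path", "../ckpt/delta_ckpt/nextgpt/7b/",
])
```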
The whole NExT-GPT training involves 3 steps:
- Step-1: Encoding-side LLM-centric Multimodal Alignment. This stage trains the input projection layer while freezing ImageBind, the LLM, and the output projection layers.

  Just run the above `train.sh` script by setting:
  - `--stage 1`
  - `--dataset x`, where `x` varies over [`cc3m`, `webvid`, `audiocap`]
  - `--data_path ../.../xxx.json`, where `xxx` is the file name of the data in [./data/T-X_pair_data]
  - `--mm_root_path .../.../x`, where `x` varies over [`images`, `audios`, `videos`]

  Also refer to the running config file [./code/config/stage_1.yaml] and the deepspeed config file [./code/dsconfig/stage_1.json] for more step-wise configurations.
- Step-2: Decoding-side Instruction-following Alignment. This stage trains the output projection layers while freezing ImageBind, the LLM, and the input projection layer.

  Just run the above `train.sh` script by setting:
  - `--stage 2`
  - `--dataset x`, where `x` varies over [`cc3m`, `webvid`, `audiocap`]
  - `--data_path ../.../xxx.json`, where `xxx` is the file name of the data in [./data/T-X_pair_data]
  - `--mm_root_path .../.../x`, where `x` varies over [`images`, `audios`, `videos`]

  Also refer to the running config file [./code/config/stage_2.yaml] and the deepspeed config file [./code/dsconfig/stage_2.json] for more step-wise configurations.
- Step-3: Instruction Tuning. This stage instruction-tunes 1) the LLM via LoRA, 2) the input projection layer, and 3) the output projection layers on the instruction datasets.

  Just run the above `train.sh` script by setting:
  - `--stage 3`
  - `--dataset instruction`
  - `--data_path ../.../xxx.json`, where `xxx` is the file name of the data in [./data/IT_data/T+X-T_data], [./data/IT_data/T-T+X_data], or [./data/IT_data/MosIT_data]
  - `--mm_root_path .../.../x`, where `x` varies over [`images`, `audios`, `videos`]

  Also refer to the running config file [./code/config/stage_3.yaml] and the deepspeed config file [./code/dsconfig/stage_3.json] for more step-wise configurations.
4. Running NExT-GPT System [Back to Top]
First, load the pre-trained NExT-GPT system:
- Step-1: load the `frozen parameters`. Please refer to 3.1 Preparing Pre-trained Checkpoint.
- Step-2: load the `tunable parameters`. Please put the NExT-GPT system at [./ckpt/delta_ckpt/nextgpt/7b_tiva_v0]. You may either 1) use the params trained by yourself, or 2) download our checkpoints from here. (We are still working hard on optimizing the system and will release the params shortly.)
Upon completion of the checkpoint loading, you can run the demo locally via:
cd ./code
bash scripts/app.sh
Specifying the key arguments as:
- `--nextgpt_ckpt_path`: the path of the pre-trained NExT-GPT params.
For any questions or feedback, feel free to contact Shengqiong Wu and Hao Fei.
If you find NExT-GPT useful in your research or applications, please kindly cite:
@article{wu2023nextgpt,
title={NExT-GPT: Any-to-Any Multimodal LLM},
author={Shengqiong Wu and Hao Fei and Leigang Qu and Wei Ji and Tat-Seng Chua},
journal = {CoRR},
volume = {abs/2309.05519},
year={2023}
}
You may refer to related work that serves as foundations for our framework and code repository: Vicuna, ImageBind, Stable Diffusion, AudioLDM, and ZeroScope. We also partially draw inspiration from PandaGPT, VPGTrans, GILL, CoDi, Video-LLaMA, and MiniGPT-4. Thanks for their wonderful works.
This repository is under the BSD 3-Clause License. NExT-GPT is a research project intended for non-commercial use only. One must NOT use the code of NExT-GPT for any illegal, harmful, violent, racist, or sexual purposes. One is strictly prohibited from engaging in any activity that may violate these guidelines. Any potential commercial use of this code should be approved by the authors.