Our repository is built on Motion-Diffusion-Model.
Set up the conda environment:
conda env create -f environment.yml
conda activate s2m
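A quick sanity check before moving on (this assumes the environment ships PyTorch, as MDM-based repositories typically do):
# Hedged check: prints the installed torch version and whether CUDA is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"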
Download dependencies:
bash prepare/download_smpl_files.sh
If you only want to use the pre-trained model to generate motion from customized sketches, you can skip this step.
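To verify the download succeeded, you can list the SMPL assets. The exact target directory is set inside download_smpl_files.sh; body_models/ is an assumption based on MDM's usual layout:
# Assumed location (check download_smpl_files.sh for the real target dir):
ls body_models/smpl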
HumanML3D
There are two paths to get the data:
(a) Take the easy way if you just want to generate sketch-to-motion results.
(b) Get the full data to train and evaluate the model.
(a) HumanML3D - Clone HumanML3D, then copy the data directory to our repository:
cd ..
git clone https://github.com/EricGuo5513/HumanML3D.git
unzip ./HumanML3D/HumanML3D/texts.zip -d ./HumanML3D/HumanML3D/
cp -r HumanML3D/HumanML3D HumanMotionGeneration/test_data
cd HumanMotionGeneration
(b) HumanML3D - Follow the instructions in HumanML3D, then copy the resulting dataset to our repository:
cp -r ../HumanML3D/HumanML3D ./test_data
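Whichever path you took, the dataset should now sit under ./test_data. A quick check (the file names below assume the standard HumanML3D layout; adjust if your copy differs):
# Expect texts/, new_joint_vecs/, Mean.npy, Std.npy, train.txt, test.txt, ...
ls ./test_data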
Sketches
Generate sketches for the HumanML3D dataset:
python -m data_loaders.humanml.utils.plot_train
The sketches will be saved under ./test_data/sketches.
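To confirm the sketches were generated, for example:
# Count and preview the generated sketch files
ls ./test_data/sketches | wc -l
ls ./test_data/sketches | head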
Pre-trained model
Download the pre-trained model and place it under ./user_output/fixed_length.
Generate motion from customized sketches
First, put 5 sketches under ./user_input and name them '0', '1', '2', '3', '4'.
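Before running generation, you can confirm the five inputs are in place. This sketch assumes .png files; swap the extension to match your format:
# Hypothetical check, assuming 0.png ... 4.png under ./user_input
for i in 0 1 2 3 4; do test -f ./user_input/$i.png || echo "missing $i.png"; done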
python -m sample.generate_customized --model_path ./user_output/fixed_length/fixed_length.pth --seed 15
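In MDM-based repositories, generation typically writes its results (e.g. a results file plus rendered samples) next to the model checkpoint; a hedged way to inspect the output:
# Output layout depends on sample.generate_customized; this path is an assumption
ls ./user_output/fixed_length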
Training
python -m train.train_S2M --save_dir save/my_S2M --dataset humanml
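Checkpoints and logs land in the --save_dir you pass; the exact file names depend on the trainer:
# Hedged: list the newest files written during training
ls -lt save/my_S2M | head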
Evaluation
python -m model_eval.model_eval