This repository is no longer maintained.

This is a reimplementation of the COMA code in PyTorch. Please follow the original authors' licensing terms if you use the code.
This code is tested with Python 3.6 and PyTorch 1.3. Requirements can be installed by running:

```shell
pip install -r requirements.txt
```
Install the mesh processing libraries from MPI-IS/mesh. Note that the Python 3 version of the mesh package is required.
To start the training, follow these steps:

- Download the registered data from the Project Page.
- Generate the default config file by running:

```shell
python config_parser.py
```
- Add the following parameters to `default.cfg`:
  - `data_dir`: path of the dataset downloaded in step 1 (e.g. `/home/pixelite1201/mesh_raw/`)
  - `checkpoint_dir`: path where checkpoints will be stored
  - `visual_output_dir`: if `visualize` is set to `True`, the visual output from evaluation will be stored here
  - `checkpoint_file`: if a checkpoint file path is provided, the script will load the parameters from it and continue training from there

  Note that `data_dir` and `checkpoint_dir` can also be provided as command-line options, which overwrite the config values:

```shell
python main.py --data_dir 'path to data' --checkpoint_dir 'path to store checkpoint'
```
- Run the training by providing the split and split_term, for example:

```shell
python main.py --split sliced --split_term sliced
python main.py --split expression --split_term bareteeth
python main.py --split identity --split_term FaceTalk_170731_00024_TA
```
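The "command-line options overwrite config values" behavior described above can be sketched with Python's standard `configparser` and `argparse` modules. This is a minimal illustration, not this repository's actual `config_parser.py`; the `Settings` section name and key set are hypothetical.

```python
# Sketch: config-file defaults overridden by command-line flags.
# Section name "Settings" and the key set are hypothetical, not from this repo.
import argparse
import configparser


def load_settings(argv=None):
    config = configparser.ConfigParser()
    # Defaults mirroring the parameters described above
    config["Settings"] = {
        "data_dir": "",
        "checkpoint_dir": "",
    }
    config.read("default.cfg")  # silently skipped if the file is absent

    parser = argparse.ArgumentParser()
    parser.add_argument("--data_dir")
    parser.add_argument("--checkpoint_dir")
    args = parser.parse_args(argv)

    # Command-line values, when given, overwrite the config file entries
    settings = dict(config["Settings"])
    for key in ("data_dir", "checkpoint_dir"):
        value = getattr(args, key)
        if value is not None:
            settings[key] = value
    return settings


# Example: load_settings(["--data_dir", "/tmp/meshes"]) keeps the config
# value for checkpoint_dir but replaces data_dir with the CLI value.
```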
To evaluate on test data, set the `eval` flag to `true` in `default.cfg` and provide the path of the checkpoint file.
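Switching between training and evaluation from such a flag can be sketched with `configparser`'s boolean parsing. The `Settings` section and key names below are assumptions for illustration, not taken from this repository's config:

```python
# Sketch: read a boolean `eval` flag from an INI-style config.
# Section/key names are hypothetical, not from this repo's default.cfg.
import configparser


def wants_eval(cfg_text):
    config = configparser.ConfigParser()
    config.read_string(cfg_text)
    # getboolean accepts true/false, yes/no, on/off, 1/0
    return config.getboolean("Settings", "eval", fallback=False)


example_cfg = """
[Settings]
eval = true
checkpoint_file = /path/to/checkpoint.pt
"""
```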
When you run the training, data preprocessing takes place automatically if the preprocessed data is not already present. However, you can also prepare the data before training, as explained below.
- Download the data from the Project Page.
- Run the preprocessing script for each split:

```shell
python data.py --split sliced --data_dir PathToData
python data.py --split expression --data_dir PathToData
python data.py --split identity --data_dir PathToData
```
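The three invocations above can also be scripted as one loop. This sketch only echoes the commands so it is safe to run without the dataset in place; `/path/to/data` is a placeholder:

```shell
# Run the preprocessing once per split; `echo` prints each command
# instead of executing it -- drop the echo to actually run them.
for split in sliced expression identity; do
  echo python data.py --split "$split" --data_dir /path/to/data
done
```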