This is a PyTorch implementation of CAMS.
Install PyTorch; most of the other packages we use are listed in `environment.yml`. We use the MANO hand implementation from manotorch.
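If you use conda, the environment can typically be created with `conda env create -f environment.yml` and activated before running any of the commands below.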
MANO is used for synthesis and evaluation. Please follow these steps:
- Download `mano_assets.zip`.
- Place it under the folder `data/` and unzip it; it should then look like `data/mano_assets`.
The meta files below are the dataset we use:
- Download the meta file, e.g. `pliers_meta.torch`.
- Place it under the folder `data/meta`; it should then look like `data/meta/pliers_meta.torch`.
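After the two downloads above, the `data/` folder should contain:

```
data/
├── mano_assets/
└── meta/
    └── pliers_meta.torch
```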
- Optional: we also release several other categories, including `scissors_meta.torch` and `bucket_meta.torch`. You can edit the `data` attribute in `experiments/pliers/config.yml` to run our code on a new category (see the note after this list).
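For example, you could point that attribute at the new meta file, e.g. `data: data/meta/scissors_meta.torch`; the exact key layout inside `config.yml` is not shown here, so treat this as a sketch.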
If you open a meta file, e.g. `pliers_meta.torch`, you will find that every manipulation sequence consists of two keys: `data` and `cams`. Under `data` you will find the ground-truth data we copied from HOI4D, while `cams` stores our generated ground-truth CAMS embedding.
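As a quick sanity check, you can inspect a meta file directly with `torch.load`. This is a minimal sketch: only the top-level `data`/`cams` keys are documented above, and the container type (list vs. dict of sequences) is an assumption here.

```python
# Minimal sketch of inspecting a meta file. Only the per-sequence
# 'data' / 'cams' keys are documented; the container layout is assumed.
import torch

meta = torch.load("data/meta/pliers_meta.torch")
print(type(meta))

# Treat the file as an iterable of per-sequence dicts, each holding
# ground truth copied from HOI4D ('data') and the generated CAMS
# embedding ('cams'). Adjust if the file is keyed differently.
sequences = meta.values() if isinstance(meta, dict) else meta
for seq in sequences:
    print(sorted(seq.keys()))  # expect ['cams', 'data']
    break
```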
The following script is a demo of how we generate our CAMS embedding from mocap data. You can modify it to generate a CAMS embedding from other mocap data, provided the data lets you find the right contact pairs by simply computing the SDF between object and hand; otherwise you may need some predefined policies to obtain the right contacts.
```
cd data/preparation
python -u gen_cams_meta_pliers.py
```
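To illustrate the SDF-based contact test mentioned above, here is a minimal sketch that flags hand vertices near or inside the object surface. It assumes `trimesh` is installed; the file paths, the 2 mm threshold, and the use of MANO's 778 hand vertices are placeholders, not values from this repo.

```python
# Sketch of finding contact pairs via signed distances between the hand
# and the object; paths and the threshold below are placeholders.
import numpy as np
import trimesh

obj_mesh = trimesh.load("object.obj")      # object surface mesh
hand_verts = np.load("hand_verts.npy")     # (778, 3) MANO hand vertices

# trimesh convention: points inside the mesh get POSITIVE distance,
# points outside get NEGATIVE distance.
sdf = trimesh.proximity.signed_distance(obj_mesh, hand_verts)

# Contact = penetrating, or within a small band of the surface.
threshold = 2e-3                           # 2 mm, assuming meters; tune per category
contact_ids = np.nonzero(sdf > -threshold)[0]
print(f"{contact_ids.size} hand vertices in contact")
```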
After finishing data preparation, you can use the following command to start training:
```
sh experiments/pliers/train.sh [1] [2] [3]
# [1] = GPU IDs you use, e.g. 0, 1
# [2] = number of GPUs you use, e.g. 2
# [3] = port
```
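For example, to train on GPUs 0 and 1: `sh experiments/pliers/train.sh 0,1 2 29500`, where `29500` is just an arbitrary free port and the exact GPU-ID format may need to match what the script expects.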
After training, you will get some outputs in `experiments/pliers/tmp`. Use the following command to start synthesizing:
```
sh synthesizer/run.sh [1] [2]
# [1] = aforementioned output path, e.g. experiments/pliers/tmp/val/
# [2] = meta data path, e.g. data/meta/pliers_meta.torch
```
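For example: `sh synthesizer/run.sh experiments/pliers/tmp/val/ data/meta/pliers_meta.torch`.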
After synthesizing, the generation of new manipulations is finished; the results are in `experiments/pliers/synth`. You can also go a step further and run the evaluation metrics with the following command:
```
sh eval/run.sh [1] [2]
# [1] = final results path, e.g. experiments/pliers/synth
# [2] = name of the file saving the evaluation result, e.g. eval.txt
```
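For example: `sh eval/run.sh experiments/pliers/synth eval.txt`.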