Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image
Pengfei Ren, Chao Wen, Xiaozheng Zheng, Zhou Xue, Haifeng Sun, Qi Qi, Jingyu Wang, Jianxin Liao*
Our method, DIR, achieves accurate and robust reconstruction of interacting hands.
📖 For more visual results, check out our project page.
[10/2023] Released the pre-trained models 👏!
[07/2023] DIR is accepted to ICCV 2023 (Oral) 🥳!
If you find our work useful for your research, please consider citing the paper:
@inproceedings{ren2023decoupled,
title={Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image},
author={Ren, Pengfei and Wen, Chao and Zheng, Xiaozheng and Xue, Zhou and Sun, Haifeng and Qi, Qi and Wang, Jingyu and Liao, Jianxin},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year={2023}
}
- Download the necessary assets misc.tar.gz and unzip them.
- Download the InterHand2.6M dataset and unzip it.
- Process the dataset with the code provided by IntagHand:
python dataset/interhand.py --data_path PATH_OF_INTERHAND2.6M --save_path ./data/interhand2.6m/
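After processing finishes, a quick sanity check can confirm that the converted data landed where the training code expects it. This is a minimal sketch: the split sub-directory names under ./data/interhand2.6m/ are assumptions about the IntagHand preprocessing output, not something guaranteed by this repository.

# sanity_check.py -- hypothetical helper, not part of the repository
from pathlib import Path

data_root = Path("./data/interhand2.6m/")
for split in ("train", "val", "test"):  # assumed split folder names
    split_dir = data_root / split
    if split_dir.exists():
        n_files = sum(1 for p in split_dir.rglob("*") if p.is_file())
        print(f"{split}: {n_files} files")
    else:
        print(f"{split}: missing, check the --save_path you passed above")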
- Python >= 3.8
- PyTorch >= 1.10
- pytorch3d >= 0.7.0
- scikit-image==0.17.1
- timm==0.6.11
- trimesh==3.9.29
- openmesh==1.1.3
- pymeshlab==2021.7
- chumpy
- einops
- imgaug
- manopth
# create conda env
conda create -n dir python=3.8
# install torch
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# install pytorch3d
pip install fvcore iopath
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
# install other requirements
cd DIR
pip install -r ./requirements.txt
# install manopth
cd manopth
pip install -e .
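After installation, the following minimal check (an illustrative sketch, not a repository script) verifies that the key dependencies import and that CUDA is visible, which catches most wheel/version mismatches early:

# check_env.py -- hypothetical helper, not part of the repository
import torch
import torchvision
import pytorch3d
import manopth  # installed above from the bundled manopth directory via `pip install -e .`

print("torch:", torch.__version__)              # expected 1.11.0+cu113 with the commands above
print("torchvision:", torchvision.__version__)  # expected 0.12.0+cu113
print("pytorch3d:", pytorch3d.__version__)      # expected >= 0.7.0
print("CUDA available:", torch.cuda.is_available())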
python train.py
Download the pre-trained models from Google Drive.
python apps/eval_interhand.py --data_path ./interhand2.6m/ --model ./checkpoint/xxx
You can align at a different joint by setting root_joint (0: wrist, 9: MCP); a sketch of this alignment is given after the sample output below.
With root_joint=0 (wrist), you should get the following output:
joint mean error:
left: 10.732769034802914 mm, right: 9.722338989377022 mm
all: 10.227554012089968 mm
vert mean error:
left: 10.479239746928215 mm, right: 9.52134095132351 mm
all: 10.000290349125862 mm
pixel joint mean error:
left: 6.329594612121582 mm, right: 5.843323707580566 mm
all: 6.086459159851074 mm
pixel vert mean error:
left: 6.235759735107422 mm, right: 5.768411636352539 mm
all: 6.0020856857299805 mm
root error: 29.26051989197731 mm
(We fixed some minor bugs, so the performance is higher than the values reported in the paper.)
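For reference, the root-relative joint errors above are mean per-joint Euclidean distances computed after subtracting the chosen root joint from both prediction and ground truth. The snippet below is an illustrative sketch of that metric with made-up array names; it is not the repository's evaluation code.

# mpjpe_sketch.py -- illustrative only
import numpy as np

def root_aligned_mean_error(pred, gt, root_joint=0):
    # pred, gt: (N, J, 3) arrays of 3D joints in millimeters.
    # root_joint: joint id used for alignment (0: wrist, 9: MCP, as above).
    pred_aligned = pred - pred[:, root_joint:root_joint + 1, :]
    gt_aligned = gt - gt[:, root_joint:root_joint + 1, :]
    return np.linalg.norm(pred_aligned - gt_aligned, axis=-1).mean()

# Dummy data just to show the call signature.
pred = np.random.rand(4, 21, 3) * 100.0
gt = np.random.rand(4, 21, 3) * 100.0
print(root_aligned_mean_error(pred, gt, root_joint=0))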
Distributed under the MIT License. See LICENSE
for more information.
The PyTorch implementation of MANO is based on manopth. We use some parts of the great code from IntagHand. We thank the authors for their excellent work!