PyTorch release of our paper:
Pixel-wise Regression: 3D Hand Pose Estimation via Spatial-form Representation and Differentiable Decoder
Xingyuan Zhang, Fuhai Zhang
If you find this repository useful, please cite our paper:
```bibtex
@ARTICLE{zhang2022srnet,
  author={Zhang, Xingyuan and Zhang, Fuhai},
  journal={IEEE Transactions on Multimedia},
  title={Differentiable Spatial Regression: A Novel Method for 3D Hand Pose Estimation},
  year={2022},
  volume={24},
  number={},
  pages={166-176},
  doi={10.1109/TMM.2020.3047552}
}
```
Update: The paper has been accepted at TMM! The title was changed as suggested by one of the reviewers; please consider citing the new version. I have not uploaded the new version to arXiv since I am not sure whether that is allowed. If you know it is OK to do so, please contact me and I will gladly update it.
Set up the environment with conda:

```sh
conda env create -f env.yml
conda activate pixelwise
```
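Since the training scripts require a working PyTorch build (which `env.yml` is expected to install), a quick sanity check after activation can save time:

```sh
# confirm PyTorch imports and report whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```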
All datasets should be placed under the `./Data` folder. After placing the datasets correctly, run `python check_dataset.py --dataset <dataset_name>` to build the data files used for training.
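For example, once the NYU folder is in place (per the instructions below), the preprocessing step looks like this:

```sh
python check_dataset.py --dataset NYU
```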
- Download the NYU dataset from its website.
- Unzip the files to `./Data` and rename the folder to `NYU` (see the sketch below).
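A minimal sketch of those two steps; the archive and extracted folder names here are hypothetical, so adjust them to whatever you actually downloaded:

```sh
# hypothetical file/folder names; check your actual download
unzip nyu_dataset.zip -d ./Data
mv ./Data/dataset ./Data/NYU
```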
- Download the MSRA dataset from Dropbox.
- Unzip the files to `./Data` and rename the folder to `MSRA` (the same unzip-and-rename pattern as NYU applies).
- Download the ICVL dataset from its website.
- Extract `Training.tar.gz` and `Testing.tar.gz` to `./Data/ICVL/Training` and `./Data/ICVL/Testing` respectively (see the sketch below).
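The extraction can be done as follows, assuming both tarballs sit in the current directory:

```sh
mkdir -p ./Data/ICVL/Training ./Data/ICVL/Testing
tar -xzf Training.tar.gz -C ./Data/ICVL/Training
tar -xzf Testing.tar.gz -C ./Data/ICVL/Testing
```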
- Ask for permission from the website and download the dataset.
- Download the center files from the GitHub release and put them in `./Data/HAND17/`.
- Extract `frame.zip` and `images.zip` to `./Data/HAND17/` (see the sketch after the folder tree below). You should end up with a folder that looks like this:
```
HAND17/
|
|-- hands17_center_train.txt
|
|-- hands17_center_test.txt
|
|-- training/
|   |
|   |-- images/
|   |
|   |-- Training_Annotation.txt
|
|-- frame/
|   |
|   |-- images/
|   |
|   |-- BoundingBox.txt
```
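A sketch of the extraction step, assuming both zips sit in the current directory and already contain the `frame/` and `training/` hierarchies shown above:

```sh
unzip frame.zip -d ./Data/HAND17/
unzip images.zip -d ./Data/HAND17/
```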
Run `python train.py --dataset <dataset_name>`, where `<dataset_name>` can be chosen from `NYU`, `ICVL`, and `HAND17`.

For the `MSRA` dataset, run `python train_msra.py --subject <subject_id>` (see the loop sketch below).
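MSRA is commonly evaluated with leave-one-subject-out cross-validation, so you will typically train once per subject. A sketch, assuming subject ids run from 0 to 8 (the nine MSRA subjects; check the script's argument parsing for the exact range):

```sh
# train one model per held-out MSRA subject (subject ids assumed to be 0-8)
for s in 0 1 2 3 4 5 6 7 8; do
    python train_msra.py --subject $s
done
```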
Run `python test.py --dataset <dataset_name>`.

For the `MSRA` dataset, run `python test_msra.py --subject <subject_id>`.
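For example, to evaluate the NYU model and then every MSRA subject (same subject-id assumption as in the training sketch above):

```sh
python test.py --dataset NYU
for s in 0 1 2 3 4 5 6 7 8; do
    python test_msra.py --subject $s
done
```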
Results and pretrained models are available in the GitHub release. The pretrained models are released under a CC BY 4.0 license.