This repository contains instructions for obtaining the data and code of the work Dense Hand-Object (HO) GraspNet with Full Grasping Taxonomy and Dynamics, presented at ECCV 2024.
Project page: HOGraspNet
HOGraspNet provides the following data and models:
- `data/source_data`: Full 1920x1080 RGB & depth images. ("Source_data/Object Pose" is unnecessary data and will be removed soon.)
- `data/labeling_data`: JSON files for annotations.
- `data/extra_data`: Binary hand & object masks for the cropped images. (Bounding boxes are provided through the dataloader module.)
- `data/source_augmented`: Cropped images around the hand and background-augmented RGB images.
- `data/obj_scanned_models`: Manually scanned 3D models for the 30 objects used in the dataset.
- This code is tested with PyTorch 2.0.0 / 2.3.1 and Python 3.10 on Linux and Windows 11.
- Clone the repository and install the main packages:
  ```bash
  git clone https://github.com/UVR-WJCHO/HOGraspNet.git
  cd HOGraspNet
  pip install -r requirements.txt
  ```
- (TBD, for visualization) Install pytorch3d by following the official instructions here (our code uses version 0.7.3).
- Set the environment variable for the dataset path:
  ```bash
  # Linux
  export HOG_DIR=/path/to/HOGraspNet
  # Windows
  set HOG_DIR=/path/to/HOGraspNet
  ```
- Please fill out this form to download the dataset after reading the terms and conditions.
- Copy the data URL from the form, then download and unzip it:
  ```bash
  cd assets
  wget -O urls.zip "[URL]"
  unzip urls.zip
  cd ..
  ```
  After running the above, you should see the following under `HOGraspNet/assets/urls/`:
  - `images.txt`: Full RGB & Depth images
  - `annotations.txt`: Annotations
  - `extra.txt`: Hand & object segmentation masks (pseudo)
  - `images_augmented.txt`: Cropped & background-augmented RGB images
Download procedure
- Download the dataset:
  - With the default options:
    - Cropped/background-augmented images + annotations + masks
    - All subjects (S1~S99)
    - Scanned object 3D models
    ```bash
    python scripts/download_data.py
    ```
  - Or with manual options (see the example after the option descriptions below):
    ```bash
    python scripts/download_data.py --type [TYPE] --subject [SUBJECT] --objModel [OBJMODEL]
    ```
- Unzip them all:
  ```bash
  python scripts/unzip_data.py
  ```
  The raw downloaded data can be found under `data/zipped/`. The unzipped data and models can be found under `data/`. See visualization.md for an explanation of how the files can be visualized.
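For reference, the unzipped layout roughly follows the folder names listed at the top of this README; the exact contents depend on the download options you chose:

```
[HOG_DIR]/data/
├── zipped/               # raw downloaded archives
├── source_data/
├── source_augmented/
├── labeling_data/
├── extra_data/
└── obj_scanned_models/
```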
Options
Depending on your usage of the dataset, we suggest different download options.
- `[TYPE]` (type: int, default: 0):
  - 0 : source_augmented (cropped) + labeling_data + extra_data (mask)
  - 1 : 0 + source_data
  - 2 : source_augmented
  - 3 : labeling_data
  - 4 : extra_data
  - 5 : source_data
- `[SUBJECT]` (type: string, default: all):
  - all : subjects 1~99
  - small : pre-defined set of 5 subjects
  - 1 : subject 1
  - 1,2 : subjects 1 and 2
  - 1-3 : subjects 1 to 3
- `[OBJMODEL]` (type: bool, default: True):
  - True : download the scanned object 3D models
  - False : skip them
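For example, a hypothetical manual download of only the cropped images for subjects 1 to 3, skipping the object models, would combine the options above as:

```bash
python scripts/download_data.py --type 2 --subject 1-3 --objModel False
```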
Subject info
Here we provide a summary of each subject's information included in the dataset: HOGraspNet_subject_info.
Please check it if you need data for a specific type of subject.
- Set the environment variable for the dataset path:
  ```bash
  export HOG_DIR=/path/to/HOGraspNet
  ```
- Use the dataloader as below:
  ```python
  import os
  from scripts.HOG_dataloader import HOGDataset

  setup = 's2'
  split = 'test'
  db_path = os.path.join(os.environ['HOG_DIR'], "data")
  dataloader = HOGDataset(setup, split, db_path)
  ```
- See data_structure.md for the detailed structure of the samples returned by the dataloader (WIP).
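As a rough usage sketch (not part of the repository), assuming `HOGDataset` behaves like a standard map-style dataset (`__len__`/`__getitem__`), you can peek at a few samples like this:

```python
import os
from scripts.HOG_dataloader import HOGDataset

# Minimal sketch: inspect a few samples from the dataloader.
# Assumes HOGDataset implements the usual __len__/__getitem__ interface;
# see data_structure.md for the exact per-sample field layout.
db_path = os.path.join(os.environ['HOG_DIR'], "data")
dataset = HOGDataset('s2', 'test', db_path)

for idx in range(min(3, len(dataset))):
    sample = dataset[idx]
    keys = list(sample.keys()) if isinstance(sample, dict) else type(sample)
    print(f"sample {idx}: {keys}")
```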
- This code is tested with Python 3.10, CUDA 11.8, and PyTorch 2.0.0 on Linux and Windows 11.
- Set up the MANO code and models in `thirdparty` from the MANO project page (https://mano.is.tue.mpg.de).
  - You need to comment out the `print 'FINITO'` statement (line 137) in `thirdparty/mano_v1_2/webuser/smpl_handpca_wrapper_HAND_only.py`.
  - If you put your MANO folder in another path, modify the `mano_path` parameter in `scripts/config.py`.
- We utilized a differentiable MANO layer for PyTorch from https://github.com/hassony2/manopth (`thirdparty/manopth`).
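As a rough illustration (not HOGraspNet-specific code), the manopth layer can be used along these lines; the `mano_root` path is an assumption and should point at the folder containing the MANO `.pkl` models:

```python
import torch
from manopth.manolayer import ManoLayer

# Sketch of the differentiable MANO layer from manopth.
# mano_root is assumed to contain MANO_RIGHT.pkl (adjust to your MANO install).
mano_layer = ManoLayer(mano_root='thirdparty/mano_v1_2/models',
                       use_pca=True, ncomps=45, side='right')

pose = torch.zeros(1, 3 + 45)   # global rotation (3) + PCA pose coefficients (45)
shape = torch.zeros(1, 10)      # MANO shape (beta) parameters

# Forward pass returns hand vertices and joints (in millimeters).
verts, joints = mano_layer(pose, shape)
print(verts.shape, joints.shape)  # torch.Size([1, 778, 3]) torch.Size([1, 21, 3])
```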
- Set up the required CUDA, torch, and pytorch3d environments:
  ```bash
  pip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118
  pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py310_cu118_pyt200/download.html
  ```
- Run the code; this will produce rendered results of 10 random samples from the downloaded data:
  ```bash
  python scripts/visualization.py
  ```
- Rendered images will be saved in `[HOG_DIR]/vis/`.
- As HOGraspNet was captured in a monotone environment, background augmentation with random images is possible. We utilized samples from the HanCo dataset as background sources.
- Here, we provide example code for manual background augmentation on the source data. This requires the `source_augmented` and `extra_data` parts of the dataset (`[TYPE]` 2 & 4 in the download options).
- Download or manually prepare background sample images in `[HOG_DIR]/data/bg_samples`.
- Run the code; this will produce augmented RGB images in `[HOG_DIR]/data/manual_augmented`:
  ```bash
  python scripts/manual_augmentation.py
  ```
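For reference, a generic compositing step (not `scripts/manual_augmentation.py` itself, and with placeholder file names) looks roughly like this: keep the hand/object pixels from the cropped image and fill the rest from a background sample.

```python
import numpy as np
from PIL import Image

# Generic background-compositing sketch; file names below are placeholders.
rgb = np.asarray(Image.open('cropped_rgb.jpg').convert('RGB'))
mask = np.asarray(Image.open('hand_object_mask.png').convert('L')) > 0  # True on hand/object
bg = Image.open('bg_sample.jpg').convert('RGB').resize((rgb.shape[1], rgb.shape[0]))
bg = np.asarray(bg)

# Foreground (hand/object) pixels come from the source image, the rest from the background.
augmented = np.where(mask[..., None], rgb, bg)
Image.fromarray(augmented.astype(np.uint8)).save('augmented_rgb.jpg')
```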
- Update data server protocol from HTTP to HTTPS (24/07/24)
- Support the Windows platform
- Update docs
- HALO model annotation (ETA: 24/10)
- Full continuous video sequence (ETA: 24/10)
- HOGraspNet v2 (ETA: 24/10)
- Object pose/contact map quality will be enhanced.
- Images and annotations for articulated objects will be added.
The dataset is released for academic research only and is free to researchers from educational or research institutes for non-commercial purposes. By downloading the dataset you agree (unless with the expressed permission of the authors) not to redistribute, modify, or commercially use this dataset in any way or form, either partially or entirely.
If using this dataset, please cite the following paper:
@inproceedings{2024graspnet,
title={Dense Hand-Object(HO) GraspNet with Full Grasping Taxonomy and Dynamics},
author={Cho, Woojin and Lee, Jihyun and Yi, Minjae and Kim, Minje and Woo, Taeyun and Kim, Donghwan and Ha, Taewook and Lee, Hyokeun and Ryu, Je-Hwan and Woo, Woontack and Kim, Tae-Kyun},
booktitle={ECCV},
year={2024}
}
This research was conducted using the "3D hand motion data for object manipulation" dataset, built with support from the National Information Society Agency (NIA) and funded by the Ministry of Science and ICT of Korea. The data used in this research can be downloaded from AI Hub (aihub.or.kr). This research (paper) used datasets from 'The Open AI Dataset Project (AI-Hub, S. Korea)'. All data information can be accessed through 'AI-Hub (www.aihub.or.kr)'.