- README Upload (2024/10/28)
- Paper Upload (2024/11/06)
- Estimated Trajectories Upload (2024/11/07)
- Evaluated Data Upload
- Training Data Upload
- Code Upload (will be released once the paper is accepted)
- Trainable Event Representation
- The synthetic event-based TartanAir data is generated using the ESIM simulator. Follow Note1 and Note2 for quick use.
1. DAVIS240C 1
Download the sample sequences boxes_6dof and poster_translation (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_davis240c.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
2. Mono-HKU Dataset 2
Download the sample sequences vicon_dark1 and vicon_hdr4 (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_mono_hku.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID \
--side=davis346
3. Stereo-HKU Dataset 3
Download the sample sequences aggressive_translation and hdr_agg (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_stereo_hku.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
4. VECtor 4
Download the sample sequences corridors_walk1 and units_scooter1 (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_vector.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
5. TUM-VIE 5
Download the sample sequences mocap-6dof and mocap-desk2 (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_tumvie.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
6. UZH-FPV 6
Download the sample sequences indoor_forward_6 and indoor_forward_7 (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_uzh_fpv.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
7. MVSEC 7
Download the sample sequences indoor_flying_1 and indoor_flying_3 (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_mvsec.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
8. DSEC 8
Download the sample sequences dsec_zurich_city_04_a and dsec_zurich_city_04_e (ASL format).
Run DEIO with the following steps:
conda activate deio
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_dsec.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID
- Taking ECMD 9 as an example: first download the rosbag file, and then run the following command:
conda activate deio
CUDA_VISIBLE_DEVICES=2 PYTHONPATH=${YOUR_WORKSPACE} python script/pp_data/pp_ecmd.py --indir=${YOUR_DATAFOLDER}
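The kind of conversion a pp_*.py preprocessing script performs can be sketched as follows. This is a minimal, hypothetical example of dumping decoded events into an EuRoC/ASL-style CSV; the file name, header, and column order are assumptions for illustration, not the exact layout DEIO's loaders expect.

```python
# Hypothetical sketch: write decoded events to an ASL-style CSV.
# The "events.csv" name and column order are assumed, not DEIO's spec.
import csv
import os

def write_asl_events(out_dir, events):
    """events: iterable of (timestamp_ns, x, y, polarity) tuples."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, "events.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["#timestamp [ns]", "x", "y", "polarity"])
        writer.writerows(events)
    return path
```

In a real preprocessing script the events would come from iterating over the rosbag's event topic rather than an in-memory list.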
- Duplicate a script from deio_davis240c.py or deio_ecmd.py
- In the script, specify the data-loading procedure for the IMU data and the event loader.
- Specify the timestamp file and unit for both event streams and IMU.
- Specify the event camera intrinsics and camera-IMU extrinsics in the script.
- Try it!
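The adaptation steps above could be sketched roughly as below. Every name and number here (the loader function, the placeholder calibration values, the timestamp scale) is illustrative only, not DEIO's actual API; take the real values from your dataset's calibration files.

```python
# Hypothetical skeleton for a new script/eval_eio/deio_<your_dataset>.py.
import numpy as np

# Event-camera intrinsics (fx, fy, cx, cy) and the 4x4 camera-IMU
# extrinsic transform, both placeholders to be replaced by your calibration.
INTRINSICS = np.array([200.0, 200.0, 170.0, 130.0])
T_CAM_IMU = np.eye(4)

def load_imu(csv_path, t_scale=1e-9):
    """Load IMU rows [t, wx, wy, wz, ax, ay, az] and convert timestamps
    to seconds via t_scale (here assuming nanosecond timestamps)."""
    data = np.loadtxt(csv_path, delimiter=",", comments="#").reshape(-1, 7)
    data[:, 0] *= t_scale
    return data
```

The timestamp-unit conversion is the step that most often goes wrong when porting a new dataset, which is why it is made explicit here.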
The estimated trajectories are saved in TUM format in the estimated_trajectories directory.
We also provide sample code for quantitative and qualitative evaluation using the evo package.
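For reference, the core number that evo's `evo_ape` reports for TUM-format trajectories (translational ATE-RMSE) boils down to the following. This bare sketch skips evo's timestamp association and SE(3)/Sim(3) alignment and assumes the position pairs are already associated and aligned.

```python
# Minimal sketch of translational ATE-RMSE over associated position pairs.
import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    """gt_xyz, est_xyz: (N, 3) arrays of already-aligned positions."""
    err = np.asarray(gt_xyz) - np.asarray(est_xyz)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```

In practice one would simply run `evo_ape tum gt.txt est.txt -a` (with `-a` enabling Umeyama alignment) rather than computing the metric by hand.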
- This work is based on DPVO, DEVO, DROID-SLAM, DBA-Fusion, and GTSAM
- More details about the trainable event representation are available in
- If you find this work helpful in your research, a simple star or a citation of our work would be the best affirmation for us. 😊
@article{GWPHKU:DEIO,
title={DEIO: Deep Event Inertial Odometry},
author={Guan, Weipeng and Lin, Fuling and Chen, Peiyu and Lu, Peng},
journal={arXiv preprint arXiv:2411.03928},
year={2024}
}