DEIO: Deep Event Inertial Odometry

Abstract

Event cameras are bio-inspired, motion-activated sensors that show great potential in handling challenging situations such as fast motion and high dynamic range. Despite their promise, existing event-based simultaneous localization and mapping (SLAM) approaches still achieve limited performance in real-world applications. On the other hand, state-of-the-art SLAM approaches that incorporate deep neural networks show impressive robustness and applicability. However, there is a lack of research on fusing learning-based event SLAM methods with an IMU, which could be indispensable for pushing event-based SLAM to large-scale, low-texture, or complex scenarios. In this paper, we propose DEIO, the first monocular deep event-inertial odometry framework, which combines a learning-based method with traditional nonlinear graph-based optimization. Specifically, we tightly integrate a trainable event-based differentiable bundle adjustment (e-DBA) with IMU pre-integration in a patch-based co-visibility factor graph that employs keyframe-based sliding-window optimization. Experiments on ten challenging public datasets demonstrate that our method achieves superior performance compared with image-based and event-based benchmarks.
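
To make the structure above concrete, the snippet below is a minimal, purely illustrative sketch (not DEIO's actual code) of a two-keyframe factor graph in GTSAM, which the acknowledgements list among the building blocks: an IMU pre-integration factor plus a bias random-walk factor couple the inertial measurements, while the learned e-DBA output is stood in for by a simple relative-pose factor. All symbols, noise values, and the synthetic IMU data are assumptions for illustration only.

import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X

# Illustrative only -- not DEIO's implementation.
# IMU pre-integration between keyframe 0 and keyframe 1 (synthetic, static data).
params = gtsam.PreintegrationParams.MakeSharedU(9.81)
params.setAccelerometerCovariance(np.eye(3) * 1e-3)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-8)
bias0 = gtsam.imuBias.ConstantBias()
pim = gtsam.PreintegratedImuMeasurements(params, bias0)
for _ in range(40):  # 0.2 s of 200 Hz samples measuring only gravity
    pim.integrateMeasurement(np.array([0.0, 0.0, 9.81]), np.zeros(3), 0.005)

# Hypothetical visual constraint standing in for the e-DBA output.
T_01 = gtsam.Pose3(gtsam.Rot3(), np.array([0.1, 0.0, 0.0]))

graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))
graph.add(gtsam.PriorFactorVector(V(0), np.zeros(3), gtsam.noiseModel.Isotropic.Sigma(3, 1e-3)))
graph.add(gtsam.PriorFactorConstantBias(B(0), bias0, gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))
# Inertial factor and bias random walk between the two keyframes.
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))
graph.add(gtsam.BetweenFactorConstantBias(B(0), B(1), gtsam.imuBias.ConstantBias(),
                                          gtsam.noiseModel.Isotropic.Sigma(6, 1e-4)))
# "Visual" factor: in DEIO this role is played by the learned e-DBA term.
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), T_01, gtsam.noiseModel.Isotropic.Sigma(6, 1e-2)))

values = gtsam.Values()
values.insert(X(0), gtsam.Pose3())
values.insert(V(0), np.zeros(3))
values.insert(B(0), bias0)
values.insert(X(1), T_01)
values.insert(V(1), np.zeros(3))
values.insert(B(1), bias0)
result = gtsam.LevenbergMarquardtOptimizer(graph, values).optimize()
print(result.atPose3(X(1)))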

Update log

  • README Upload (2024/10/28)
  • Paper Upload (2024/11/06)
  • Estimated Trajectories Upload (2024/11/07)
  • Evaluated Data Upload
  • Training Data Upload
  • Code Upload (will be released once the paper is accepted)
  • Trainable Event Representation

Setup and Installation

Training and Supervision

  • The synthetic event-based TartanAir data is generated using the ESIM simulator. Follow Note1 and Note2 for quick use. For reference, a sketch of a common voxel-grid event representation is given below.
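
The update log also mentions a trainable event representation; as a rough reference point, the following is a minimal sketch of the common fixed (hand-crafted) alternative, which bins an event stream into temporal channels with linear interpolation in time. The function and variable names are hypothetical and not from this repository.

import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Bin events (N, 4) of (t, x, y, p), with p in {-1, +1}, into a (num_bins, H, W) grid."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    t = events[:, 0].astype(np.float64)
    # Normalize timestamps to [0, num_bins - 1] and spread each event's
    # polarity over the two neighboring temporal bins.
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (num_bins - 1)
    for tn, x, y, p in zip(t_norm, events[:, 1], events[:, 2], events[:, 3]):
        b = int(tn)
        frac = tn - b
        voxel[b, int(y), int(x)] += p * (1.0 - frac)
        if b + 1 < num_bins:
            voxel[b + 1, int(y), int(x)] += p * frac
    return voxel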

Evaluating DEIO

1. DAVIS240C 1

Download the sample sequences boxes_6dof and poster_translation (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_davis240c.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in DAVIS240C

2. Mono-HKU Dataset 2

Download the sample sequences vicon_dark1 and vicon_hdr4 (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_mono_hku.py \
--datapath=${YOUR_DATAFOLDER} \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID \
--side=davis346

Estimated trajectories against the GT in Mono-HKU Dataset

3. Stereo-HKU Dataset 3

Download the sample sequences aggressive_translation and hdr_agg (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_stereo_hku.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in Stereo-HKU Dataset

4. VECtor 4

Download the sample sequences corridors_walk1 and units_scooter1 (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_vector.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in VECtor

5. TUM-VIE 5

Download the sample sequences mocap-6dof and mocap-desk2 (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_tumvie.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in TUM-VIE Dataset

6. UZH-FPV 6

Download the sample sequences indoor_forward_6 and indoor_forward_7 (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_uzh_fpv.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in UZH-FPV

7. MVSEC 7

Download the sample sequences indoor_flying_1 and indoor_flying_3 (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_mvsec.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in MVSEC Dataset

8. DSEC 8

Download the sample sequences dsec_zurich_city_04_a and dsec_zurich_city_04_e (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_dsec.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in DSEC

9. EDS 9

Download the sample sequences 00_peanuts_dark and 09_ziggy_flying_pieces (ASL format).

Run DEIO as follows:

conda activate deio

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=${YOUR_WORKSPACE} python script/eval_eio/deio_eds.py \
--datapath=${YOUR_DATAFOLDER}  \
--weights=eDBA.pth \
--visual_only=0 \
--evaluate_flag \
--enable_event \
--SCORER_EVAL_USE_GRID

Estimated trajectories against the GT in EDS

Run on Your Own Dataset

  • Taking ECMD 10 as an example: first download the rosbag file, then run the following command:
conda activate deio

CUDA_VISIBLE_DEVICES=2 PYTHONPATH=${YOUR_WORKSPACE} python script/pp_data/pp_ecmd.py --indir=${YOUR_DATAFOLDER}
  • Duplicate a script such as deio_davis240c.py or deio_ecmd.py.
  • In the duplicated script, specify the loading procedure for the IMU data and the event stream.
  • Specify the timestamp files and their units for both the event stream and the IMU.
  • Specify the event camera intrinsics and the camera-IMU extrinsics in the script (a sketch of these pieces follows this list).
  • Try it!
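
As a rough illustration of the pieces such a script needs, the snippet below sketches hypothetical placeholders for the intrinsics, extrinsics, timestamp units, and an IMU loader. None of the names or values come from this repository; adapt them to your sensor and data layout.

import numpy as np

# Event camera intrinsics: fx, fy, cx, cy (add distortion parameters if needed).
intrinsics = np.array([320.0, 320.0, 173.0, 130.0])

# Camera-IMU extrinsics: 4x4 homogeneous transform from the event camera frame
# to the IMU/body frame.
T_body_cam = np.eye(4)

# Timestamp conventions, e.g. events in microseconds and IMU in seconds.
EVENT_TIME_UNIT = 1e-6
IMU_TIME_UNIT = 1.0

def load_imu(path):
    """Load IMU data as (N, 7): timestamp [s], gyro xyz [rad/s], accel xyz [m/s^2]."""
    data = np.loadtxt(path, delimiter=",")
    data[:, 0] *= IMU_TIME_UNIT  # convert timestamps to seconds
    return data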

Estimated trajectories of DEIO against the GNSS-INS-RTK ground truth in ECMD

Using Our Results as Comparison

For the convenience of comparison, we release the estimated trajectories of DEIO in TUM format in the estimated_trajectories directory. We also provide sample code for quantitative and qualitative evaluation using the evo package.
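
For instance, a minimal evaluation of a released trajectory against ground truth with evo's Python API might look like the snippet below; the file names are placeholders, and the estimate is aligned with scale correction since monocular odometry is up to scale.

from evo.core import metrics, sync
from evo.tools import file_interface

# Placeholder file names -- substitute your ground truth and a trajectory
# from the estimated_trajectories directory (both in TUM format).
traj_ref = file_interface.read_tum_trajectory_file("groundtruth.txt")
traj_est = file_interface.read_tum_trajectory_file("deio_estimate.txt")

# Associate poses by timestamp, then align the estimate to the ground truth.
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est, max_diff=0.01)
traj_est.align(traj_ref, correct_scale=True)

# Absolute trajectory error (translation part).
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_ref, traj_est))
print("ATE RMSE [m]:", ape.get_statistic(metrics.StatisticsType.rmse))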

Acknowledgement

  • This work is based on DPVO, DEVO, DROID-SLAM, DBA-Fusion, and GTSAM
  • More details about the trainable event representation are available in
  • If you find this work helpful in your research, a star or a citation of our work would be the best affirmation for us. 😊
@article{GWPHKU:DEIO,
  title={DEIO: Deep Event Inertial Odometry},
  author={Guan, Weipeng and Lin, Fuling and Chen, Peiyu and Lu, Peng},
  journal={arXiv preprint arXiv:2411.03928},
  year={2024}
}
