Visual Global Localization Based on Deep Neural Networks for Self-Driving Cars
Authors: Thiago Gonçalves Cavalcante, Avelino Forechi, Thiago Oliveira-Santos, Alberto F. De Souza and Claudine Badue
If you use DeepVGL in an academic work, please cite:
@INPROCEEDINGS{9533843,
author={Cavalcante, Thiago Gonçalves and Oliveira-Santos, Thiago and De Souza, Alberto F. and Badue, Claudine and Forechi, Avelino},
booktitle={2021 International Joint Conference on Neural Networks (IJCNN)},
title={Visual Global Localization Based on Deep Neural Networks for Self-Driving Cars},
volume={},
number={},
pages={1-7},
doi={10.1109/IJCNN52387.2021.9533843},
year={2021}
}
See DeepVGL videos:
DEV version: https://drive.google.com/file/d/1_82n_fol89TteE_VZp-4YPR0AFLKq9Lh/view?usp=sharing
IARA version: https://drive.google.com/file/d/1qOO9e961YI2500WHBjYh5Z0tw7_jhRuS/view?usp=sharing
Volta-da-UFES dataset download link: https://drive.google.com/drive/folders/1tqRKGO3DtW1yreoxYeD9Ssc3Ip8fxaXC?usp=sharing
A deep neural network approach to visual global localization that runs in real time on low-cost GPUs.
- SABGL: https://github.com/LCAD-UFES/SABGL
- WNN-CNN-GL: https://github.com/LCAD-UFES/WNN-CNN-GL
Enable DeepVGL in Carmen.
Remember to copy or generate all necessary files into the module's config folder.
Required files:
- table of poses / images
- network weights
- network configuration file
(Python 2.7 must be installed on the system.)
If you want to train with our logs (Volta-da-UFES), skip to STEP 2.
Define the logs that will be used to train Darknet (make sure the bumblebee and velodyne folders are present).
They are usually in "/dados".
ls /dados
log_volta_da_ufes-20160825.txt
log_volta_da_ufes-20160825.txt_bumblebee
log_volta_da_ufes-20160825.txt_velodyne
log_volta_da_ufes-20191003.txt
log_volta_da_ufes-20191003.txt_bumblebee
log_volta_da_ufes-20191003.txt_velodyne
(Use 2 or more logs for correct execution of the scripts.)
Generate the logs.txt file with:
1 - the absolute path to the logs.
2 - the absolute path to the images target directory.
3 - selected camera.
4 - crop height (to eliminate IARA's car-hood).
5 - log format (1 or 2).
The first log will be used to generate base poses, the others to generate live poses (a small helper sketch follows the example below).
ex.:
/dados/log_volta_da_ufes-20191003.txt /dados/ufes/20191003 3 380 1
/dados/log_volta_da_ufes-20160825.txt /dados/ufes/20160825 3 380 1
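A minimal Python sketch of how logs.txt could be assembled programmatically; the entries are just the example above, and the helper itself is illustrative rather than part of the repository:
# Hypothetical helper: writes one line per log in the five-field format
# described above: log path, images directory, camera id, crop height, log format.
entries = [
    ("/dados/log_volta_da_ufes-20191003.txt", "/dados/ufes/20191003", 3, 380, 1),
    ("/dados/log_volta_da_ufes-20160825.txt", "/dados/ufes/20160825", 3, 380, 1),
]
with open("logs.txt", "w") as f:
    for log_path, image_dir, camera, crop, log_format in entries:
        f.write("%s %s %d %d %d\n" % (log_path, image_dir, camera, crop, log_format))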
Generate the pose files associated with the images, here called camerapos_files. They are a preview of the datasets, without any post-processing.
To do so, run a playback of each log using process-ground-truth-generator.ini with the localize_neural_dataset module turned on.
Edit process-ground-truth-generator.ini and adjust these lines to your case:
playback gt_log 1 0 ./playback /dados/log_voltadaufes-20160825.txt
exporter gt_generator 1 0 ./localize_neural_dataset -camera_id 3 -output_dir /dados/ufes/20160825 -output_txt /dados/ufes/camerapos-20160825.txt
and run:
cd $CARMEN_HOME/bin
./central &
./proccontrol process-ground-truth-generator.ini
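Once proccontrol finishes, each camerapos file pairs images with poses. A quick, layout-agnostic sanity check in Python (the exact column layout is not assumed here; inspect the generated file for details):
# Count entries and fields in a generated camerapos file.
with open("/dados/ufes/camerapos-20160825.txt") as f:
    lines = [line.split() for line in f if line.strip()]
print("%d entries, %d fields per entry" % (len(lines), len(lines[0]) if lines else 0))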
Generate images:
Execute the following command to generate the images from each selected log.
./scripts/generate_images.sh
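The crop height given in logs.txt (380 in the example) removes IARA's car hood from the frames. A rough sketch of the idea, assuming the crop keeps the top crop_height rows; generate_images.sh performs the actual conversion, and its exact cropping may differ:
import cv2
crop_height = 380
img = cv2.imread("/dados/ufes/20160825/example.png")  # hypothetical file name; use any generated image
cropped = img[:crop_height, :]  # keep the top rows, dropping the car-hood region at the bottom
cv2.imwrite("/dados/ufes/20160825/example_cropped.png", cropped)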
-
Volta-da-UFES dataset download link: https://drive.google.com/drive/u/1/folders/1tqRKGO3DtW1yreoxYeD9Ssc3Ip8fxaXC
-
Save them to /dados/ufes.
Generate the logs.txt file with:
1 - the absolute path to the logs.
2 - the absolute path to the images target directory.
3 - selected camera.
4 - crop height (to eliminate IARA's car-hood).
5 - log format (1 or 2).
The first log will be used to generate base poses, the others to generate live poses.
ex.:
/dados/log_volta_da_ufes-20191003.txt /dados/ufes/20191003 3 380 1
/dados/log_volta_da_ufes-20160825.txt /dados/ufes/20160825 3 380 1
Configure the following parameters in 'script/config.txt':
- image_path="/dados/ufes/" # images target directory from the previous steps
- output_path="/dados/ufes_gt/" # output directory
- base_offset=5 # spacing between base poses
- live_offset=1 # spacing between live poses
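To illustrate what the offsets mean, here is a Python sketch of pose subsampling, assuming base_offset and live_offset are distances (in meters) along the trajectory between consecutive selected poses; dataset.sh implements the actual selection, and the real units and logic may differ:
import math

def subsample(poses, offset):
    # Keep a pose whenever we have moved at least `offset` meters
    # away from the last pose that was kept.
    kept, last = [], None
    for x, y, name in poses:
        if last is None or math.hypot(x - last[0], y - last[1]) >= offset:
            kept.append((x, y, name))
            last = (x, y)
    return kept

# Synthetic trajectory: one pose every 0.5 m along a straight line.
trajectory = [(0.5 * i, 0.0, "img_%05d.png" % i) for i in range(100)]
base_poses = subsample(trajectory, 5)  # base_offset = 5
live_poses = subsample(trajectory, 1)  # live_offset = 1
print(len(base_poses), len(live_poses))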
Generate the dataset itself:
./scripts/dataset.sh