This is the source code for the paper titled "LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics" [arXiv] [RSS 2018 Proceedings].
An example output image showing Keypoint Correspondences:
Flowchart of the proposed approach:
If you find this work useful, please cite it as:
Sourav Garg, Niko Sunderhauf, and Michael Milford. LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics. Proceedings of Robotics: Science and Systems XIV, 2018.
bibtex:

    @article{garg2018lost,
      title={LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics},
      author={Garg, Sourav and Suenderhauf, Niko and Milford, Michael},
      journal={Proceedings of Robotics: Science and Systems XIV},
      year={2018}
    }
If you use the RefineNet code, please also cite RefineNet as mentioned on their Github page.
- Ubuntu (Tested on 14.04)
- RefineNet
  - Required primarily for visual semantic information. Dense descriptors based on convolutional feature maps are also extracted from the same network.
  - A modified fork of RefineNet's code is used in this work to simultaneously store convolutional dense descriptors.
  - Requires Matlab (Tested on 2017a)
- Python (Tested on 2.7)
  - numpy (Tested on 1.11.1, 1.14.2)
  - scipy (Tested on 0.13.3, 0.17.1)
  - skimage (Minimum Required 0.13.1)
  - sklearn (Tested on 0.14.1, 0.19.1)
  - h5py (Tested on 2.7.1)
- Docker (optional, recommended, tested on 17.12.0-ce)
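The version numbers above are those the code has been tested against; nearby versions may also work. To quickly check what is installed in your environment, a snippet along these lines can be used (a minimal sketch, not part of the repository):

```python
# Print the installed versions of the Python dependencies listed above.
# The tested versions are noted in the list; newer releases may also work.
import numpy
import scipy
import skimage
import sklearn
import h5py

for name, module in [("numpy", numpy), ("scipy", scipy), ("skimage", skimage),
                     ("sklearn", sklearn), ("h5py", h5py)]:
    print("%s %s" % (name, getattr(module, "__version__", "unknown")))
```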
- In your workspace, clone the repositories:
NOTE: If you download this repository as a zip, the RefineNet fork will not be downloaded automatically because it is a git submodule.
    git clone https://github.com/oravus/lostX.git
    cd lostX
    git clone https://github.com/oravus/refinenet.git
- Download the ResNet-101 model pre-trained on the Cityscapes dataset from here or here. More details are available on RefineNet's Github page.
- Place the downloaded model's `.mat` file in the `refinenet/model_trained/` directory.
- If you are using docker, download the docker image:

      docker pull souravgarg/vpr-lost-kc:v1
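Before moving on, it can help to confirm that the RefineNet fork was actually cloned (it is skipped by zip downloads, see the note above) and that the pre-trained `.mat` model is in place. The following is a minimal sketch, not part of the repository, assumed to be run from the `lostX/` directory:

```python
# Optional sanity check (a sketch, not part of this repository).
# Verifies that the RefineNet fork was cloned and that the pre-trained
# .mat model was placed in refinenet/model_trained/.
import glob
import os

print("refinenet cloned: %s" % os.path.isdir("refinenet/main"))
print("model files found: %s" % glob.glob("refinenet/model_trained/*.mat"))
```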
1. Generate and store semantic labels and dense convolutional descriptors from RefineNet's conv5 layer. In the MATLAB workspace, from the `refinenet/main/` directory, run:

        demo_predict_mscale_cityscapes

   The above will use the sample dataset from the `refinenet/datasets/` directory. You can set the path to your own data in `demo_predict_mscale_cityscapes.m` through the variables `datasetName` and `img_data_dir`.

   You might have to run `vl_compilenn` before running the demo; please refer to the instructions for running RefineNet in their official Readme.md.
2. [For Docker users] If you have an environment with Python and other dependencies installed, skip this step; otherwise, run a docker container:

        docker run -it -v PATH_TO_YOUR_HOME_DIRECTORY/:/workspace/ souravgarg/vpr-lost-kc:v1 /bin/bash

   From within the docker container, navigate to the `lostX/lost_kc/` repository. The `-v` option mounts PATH_TO_YOUR_HOME_DIRECTORY to the `/workspace` directory within the docker container.
3. Reformat and pre-process RefineNet's output from the `lostX/lost_kc/` directory:

        python reformat_data.py -p $PATH_TO_REFINENET_OUTPUT

   `$PATH_TO_REFINENET_OUTPUT` is set to be the parent directory of `predict_result_full`, for example, `../refinenet/cache_data/test_examples_cityscapes/1-s_result_20180427152622_predict_custom_data/predict_result_1/`
4. Compute LoST descriptor:

        python LoST.py -p $PATH_TO_REFINENET_OUTPUT
5. Repeat steps 1, 3, and 4 to generate output for the other dataset by setting the variable `datasetName` to `2-s`.
6. Perform place matching using the difference matrix based on LoST descriptors and Keypoint Correspondences:

        python match_lost_kc.py -n 10 -f 0 -p1 $PATH_TO_REFINENET_OUTPUT_1 -p2 $PATH_TO_REFINENET_OUTPUT_2
Note: Run `python FILENAME -h` for any of the Python source files in steps 3, 4, and 6 for a description of the arguments passed to those files.
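Once RefineNet's output has been generated for both traverses (step 1), the remaining Python steps can be chained from a single script. The following is a minimal sketch, not part of the repository; the two paths are placeholders for the parent directories of `predict_result_full` of the two traverses, and it is assumed to be run from the `lostX/lost_kc/` directory:

```python
# Minimal sketch (not part of this repository): chains steps 3, 4, and 6
# for two RefineNet output directories. The paths below are placeholders.
import subprocess

path_1 = "/path/to/refinenet_output_traverse_1/"  # placeholder path
path_2 = "/path/to/refinenet_output_traverse_2/"  # placeholder path

# Steps 3 and 4: reformat RefineNet's output and compute LoST descriptors
for path in (path_1, path_2):
    subprocess.check_call(["python", "reformat_data.py", "-p", path])
    subprocess.check_call(["python", "LoST.py", "-p", path])

# Step 6: place matching with LoST descriptors and keypoint correspondences
subprocess.check_call(["python", "match_lost_kc.py", "-n", "10", "-f", "0",
                       "-p1", path_1, "-p2", path_2])
```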
The code is released under MIT License.